Search results
 Title
 Analysis of Orientational Restraints in Solid-State Nuclear Magnetic Resonance with Applications to Protein Structure Determination.
 Creator

Achuthan, Srisairam, Quine, John R., Cross, Timothy A., Sumners, DeWitt, Bertram, Richard, Department of Mathematics, Florida State University
 Abstract/Description

Of late, path-breaking advances are flourishing in the field of solid-state Nuclear Magnetic Resonance (ssNMR) spectroscopy. One of the major applications of ssNMR techniques is determining high-resolution three-dimensional structures of biological molecules such as membrane proteins. An explicit example of this is PISEMA (Polarization Inversion Spin Exchange at the Magic Angle). This dissertation studies and analyzes the use of orientational restraints in general, and particularly the restraints measured through PISEMA. Here, we have applied our understanding of orientational restraints to briefly investigate the structure of Amantadine-bound M2-TMD, a membrane protein in the Influenza A virus. We model the protein backbone structure as a discrete curve in space, with atoms represented by vertices and the covalent bonds connecting them as edges. The oriented structure of this curve with respect to an external vector is emphasized. The map from the surface of the unit sphere to the PISEMA frequency plane is examined in detail; the image is a powder pattern in the frequency plane, and a discussion of the resulting image is provided. Solutions to the PISEMA equations lead to multiple orientations of the magnetic field vector for a given point in the frequency plane; these are captured by sign degeneracies in the vector coordinates. The intensity of NMR powder patterns is formulated in terms of a probability density function for 1D spectra and a joint probability density function for 2D spectra. The intensity analysis for 2D spectra proves helpful in assessing the robustness of PISEMA data. To build protein structures by gluing together diplanes, certain necessary conditions must be met; we formulate these as continuity conditions on diplanes. The number of oriented protein structures is enumerated in the degeneracy framework for diplanes, and torsion angles are expressed via sign degeneracies.
For aligned protein samples, the PISA wheel approach to modeling the protein structure is adopted. Finally, an atomic model of the monomer structure of M2-TMD with Amantadine has been elucidated based on PISEMA orientational restraints. This is joint work with Jun Hu and Tom Asbury: the PISEMA data were collected by Jun Hu, and the molecular modeling was performed by Tom Asbury.
 Date Issued
 2006
 Identifier
 FSU_migr_etd0109
 Format
 Thesis
 Title
 Discontinuous Galerkin Spectral Element Approximations on Moving Meshes for Wave Scattering from Reflective Moving Boundaries.
 Creator

Acosta-Minoli, Cesar Augusto, Kopriva, David, Srivastava, Anuj, Hussaini, M. Yousuff, Sussman, Mark, Ewald, Brian, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation develops and evaluates a high-order method to compute wave scattering from moving boundaries. Specifically, we derive and evaluate a Discontinuous Galerkin Spectral Element Method (DGSEM) with an Arbitrary Lagrangian-Eulerian (ALE) mapping to compute conservation laws on moving meshes, along with numerical boundary conditions for Maxwell's equations, the linear Euler equations, and the nonlinear Euler gas-dynamics equations to calculate the numerical flux on reflective moving boundaries. We use one of a family of explicit time integrators such as Adams-Bashforth or low-storage explicit Runge-Kutta. The approximations preserve the discrete metric identities and the Discrete Geometric Conservation Law (DGCL) by construction. We present time-step refinement studies with moving meshes to validate the moving mesh approximations. The test problems include propagation of an electromagnetic Gaussian plane wave, a cylindrical pressure wave propagating in a subsonic flow, and a vortex convecting in a uniform inviscid subsonic flow. Each problem is computed on a time-deforming mesh with three methods used to calculate the mesh velocities: from exact differentiation, from the integration of an acceleration equation, and from numerical differentiation of the mesh position. In addition, we present four numerical examples using Maxwell's equations, one using the linear Euler equations, and one using the nonlinear Euler equations to validate these approximations: reflection of light from a constantly moving mirror, reflection of light from a constantly moving cylinder, reflection of light from a vibrating mirror, reflection of sound in linear acoustics, and dipole sound generation by an oscillating cylinder in an inviscid flow.
 Date Issued
 2011
 Identifier
 FSU_migr_etd0111
 Format
 Thesis
 Title
 Deterministic and Stochastic Aspects of Data Assimilation.
 Creator

Akella, Santharam, Navon, Ionel Michael, O'Brien, James J., Erlebacher, Gordon, Wang, Qi, Sussman, Mark, Department of Mathematics, Florida State University
 Abstract/Description

The principles of optimal control of distributed parameter systems are used to derive a powerful class of numerical methods for the solution of inverse problems, called data assimilation (DA) methods. Using these DA methods, one can efficiently estimate the state of a system and its evolution. This information is crucial for achieving more accurate long-term forecasts of complex systems, for instance the atmosphere. DA methods achieve their goal of optimal estimation by combining all available information, in the form of measurements of the state of the system and a dynamical model which describes the evolution of the system. In this dissertation we study the impact of new nonlinear numerical models on DA. High-resolution advection schemes have been developed and studied to model the propagation of flows involving sharp fronts and shocks; their impact in the framework of inverse problem solution/DA had previously been studied only in the context of linear models. A detailed study of the impact of various slope limiters and the piecewise parabolic method (PPM) on DA is the subject of this work. In 1D we use a nonlinear viscous Burgers equation, and in 2D a global nonlinear shallow water model. The results obtained show that using the various advection schemes consistently improves variational data assimilation (VDA) in the strong-constraint form, which does not include model error. The cost functional also included an efficient and physically meaningful construction of the background term, J_b, using balance- and diffusion-equation-based correlation operators. This was followed by an in-depth study of various approaches to modeling the systematic component of model error in the framework of weak-constraint VDA. Three simple forms of the evolution of model error were tested: decreasing, invariant, and exponentially increasing in time.
The inclusion of model error provides a substantial reduction in forecasting errors; in particular, the exponentially increasing form in conjunction with the piecewise parabolic high-resolution advection scheme was found to provide the best results. Results obtained in this work can be used to formulate sophisticated forms of model error, and could lead to the implementation of new VDA methods using numerical weather prediction models which involve high-resolution advection schemes such as the van Leer slope limiters and the PPM.
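As an illustration of the slope-limited, high-resolution advection schemes this abstract refers to, here is a minimal sketch of one time step of linear advection with the van Leer limiter. The scheme and function names are illustrative assumptions, not taken from the dissertation (which works with the nonlinear Burgers and shallow water equations):

```python
import numpy as np

def van_leer(r):
    """Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(u, c):
    """One step of linear advection u_t + a*u_x = 0 (a > 0) on a periodic
    grid: a MUSCL-type upwind scheme with van Leer slope limiting.
    c = a*dt/dx is the Courant number; TVD stability needs 0 < c <= 1."""
    du = np.roll(u, -1) - u                        # u_{i+1} - u_i
    r = (u - np.roll(u, 1)) / (du + 1e-12)         # smoothness ratio
    # limited interface value at i+1/2 (second order where smooth)
    u_face = u + 0.5 * (1.0 - c) * van_leer(r) * du
    flux = c * u_face                              # upwind flux for a > 0
    return u - (flux - np.roll(flux, 1))
```

The limiter keeps the update total-variation diminishing, so sharp fronts are advected without the spurious oscillations an unlimited second-order scheme would produce.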
 Date Issued
 2006
 Identifier
 FSU_migr_etd0145
 Format
 Thesis
 Title
 Peridynamic Multiscale Models for the Mechanics of Materials: Constitutive Relations, Upscaling from Atomistic Systems, and Interface Problems.
 Creator

Seleson, Pablo D., Gunzburger, Max, Rikvold, Per Arne, El-Azab, Anter, Peterson, Janet, Shanbhag, Sachin, Lehoucq, Richard B., Parks, Michael L., Department of Scientific Computing, Florida State University
 Abstract/Description

This dissertation focuses on the nonlocal continuum peridynamics model for the mechanics of materials, related constitutive models, its connections to molecular dynamics and classical elasticity, and its multiscale and multimodel capabilities. A more generalized role is defined for influence functions in the state-based peridynamic model, which allows the strength of nonlocal interactions to be modulated. This enables connections between different peridynamic constitutive models, establishing a hierarchy that reveals some models to be special cases of others. Furthermore, it allows the strength of nonlocal interactions to be modulated even for a fixed radius of interaction between material points. The multiscale aspect of peridynamics is demonstrated through its connections to molecular dynamics: using higher-order gradient models, it is shown that peridynamics can be viewed as an upscaling of molecular dynamics, preserving the relevant dynamics under appropriate choices of length scales. The state-based peridynamic model is shown to be appropriate for the description of multiscale and multimodel systems. A formulation for nonlocal interface problems involving scalar fields is presented, and nonlocal transmission conditions are derived. Specializations that describe local, nonlocal, and local/nonlocal transmission conditions are considered. Moreover, the convergence of the nonlocal transmission conditions to their classical local counterparts is shown. In all cases, results are illustrated by numerical experiments.
 Date Issued
 2010
 Identifier
 FSU_migr_etd0273
 Format
 Thesis
 Title
 Quasi-Monte Carlo and Genetic Algorithms with Applications to Endogenous Mortgage Rate Computation.
 Creator

Shah, Manan, Okten, Giray, Goncharov, Yevgeny, Srinivasan, Ashok, Bellenot, Steve, Case, Bettye Anne, Kercheval, Alec, Kopriva, David, Nichols, Warren, Department of Mathematics, Florida State University
 Abstract/Description

In this dissertation, we introduce a genetic algorithm approach to estimating the star discrepancy of a point set. This algorithm allows the estimation of the star discrepancy in dimensions larger than seven, something that could not be done adequately by existing methods. We then introduce a class of random digit permutations for the Halton sequence and show that these permutations yield comparable or better results than their deterministic counterparts in any number of dimensions for the test problems considered. Next, we use randomized quasi-Monte Carlo methods to numerically solve a one-factor mortgage model expressed as a stochastic fixed-point problem. Finally, we show that this mortgage model coincides with, and is computationally faster than, Citigroup's MOATS model, which is based on a binomial tree approach.
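For context, the Halton sequence mentioned here concatenates van der Corput sequences in coprime bases, and a digit permutation scrambles each base's digits. A minimal sketch (the function names are mine; the permutation is applied only to the digits of the index, so permutations should fix 0, unlike the more general randomizations studied in the dissertation):

```python
def halton_point(i, base, perm=None):
    """i-th element (i >= 1) of the van der Corput sequence in `base`;
    `perm` optionally permutes each digit (a scrambled sequence)."""
    f, x = 1.0, 0.0
    while i > 0:
        i, d = divmod(i, base)
        if perm is not None:
            d = perm[d]          # digit permutation; perm[0] should be 0
        f /= base
        x += f * d
    return x

def halton(n, bases=(2, 3), perms=None):
    """First n points of the (optionally digit-permuted) Halton sequence."""
    if perms is None:
        perms = [None] * len(bases)
    return [[halton_point(i, b, p) for b, p in zip(bases, perms)]
            for i in range(1, n + 1)]
```

For example, `halton(3)` yields the familiar 2D points (1/2, 1/3), (1/4, 2/3), (3/4, 1/9).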
 Date Issued
 2008
 Identifier
 FSU_migr_etd0297
 Format
 Thesis
 Title
 Modeling the Folding Pattern of the Cerebral Cortex.
 Creator

Striegel, Deborah A., Hurdal, Monica K., Steinbock, Oliver, Quine, Jack, Sumners, DeWitt, Bertram, Richard, Department of Mathematics, Florida State University
 Abstract/Description

The mechanism of cortical folding pattern formation is not fully understood. Current models describe pattern formation through local interactions; one recent example is the intermediate progenitor (IP) model, which describes a local, chemically driven scenario in which an increase in intermediate progenitor cells in the subventricular zone (an area surrounding the lateral ventricles) correlates with gyral formation. This dissertation presents the Global Intermediate Progenitor (GIP) model, a theoretical biological model that uses features of the IP model and further captures global characteristics of cortical pattern formation. To illustrate how global features can affect the development of certain patterns, a mathematical model incorporating a Turing system is used to examine pattern formation on a prolate spheroidal surface. Pattern formation in a biological system can be studied with a Turing reaction-diffusion system, which uses characteristics of domain size and shape to predict which pattern will form. The GIP model approximates the shape of the lateral ventricle with a prolate spheroid. This representation captures a key shape feature, lateral ventricular eccentricity, in terms of the focal distance of the prolate spheroid. A formula relating the domain scale and focal distance of a prolate spheroidal surface to specific prolate spheroidal harmonics is developed. This formula allows the prediction of pattern formation, with solutions in the form of prolate spheroidal harmonics, based on the size and shape of the prolate spheroidal surface. Using this formula, a direct correlation is found between the size and shape of the lateral ventricle, which drives the shape of the ventricular zone, and cerebral cortical folding pattern formation.
This correlation is illustrated in two different applications: (i) how the location and directionality of the initial cortical folds change with respect to evolutionary development, and (ii) how the initial folds change with respect to certain diseases, such as Microcephalia Vera and Megalencephaly Polymicrogyria Polydactyly with Hydrocephalus. The significance of the model presented in this dissertation is that it elucidates the consistency of cortical patterns among healthy individuals within a species and addresses interspecies variability based on global characteristics. This model provides a critical piece of the puzzle of cortical pattern formation.
 Date Issued
 2009
 Identifier
 FSU_migr_etd0394
 Format
 Thesis
 Title
 A Spectral Element Method to Price Single and Multi-Asset European Options.
 Creator

Zhu, Wuming, Kopriva, David A., Huffer, Fred, Case, Bettye Anne, Kercheval, Alec N., Okten, Giray, Wang, Xiaoming, Department of Mathematics, Florida State University
 Abstract/Description

We develop a spectral element method to price European options under the Black-Scholes model, Merton's jump-diffusion model, and Heston's stochastic volatility model with one or two assets. The method uses piecewise high-order Legendre polynomial expansions to approximate the option price, represented pointwise on a Gauss-Lobatto mesh within each element. This piecewise polynomial approximation allows an exact representation of the non-smooth initial condition. For options on one asset under the jump-diffusion model, the convolution integral is approximated by high-order Gauss-Lobatto quadratures. A second-order implicit/explicit (IMEX) approximation is used to integrate in time, with the convolution integral integrated explicitly; as a result, only a block-diagonal, rather than full, system of equations needs to be solved at each time step. For options with two variables, i.e., two assets under the Black-Scholes model or one asset under the stochastic volatility model, the domain is subdivided into quadrilateral elements. Within each element, the expansion basis functions are chosen to be tensor products of the Legendre polynomials. Three iterative methods are investigated to solve the system of equations at each time step with the corresponding second-order time integration schemes, i.e., IMEX and Crank-Nicolson. The boundary conditions are also carefully studied for the stochastic volatility model. The method is spectrally accurate (exponentially convergent) in space and second-order accurate in time for European options under all three models. Spectral accuracy is observed not only in the solution but also in the Greeks.
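The Gauss-Lobatto mesh mentioned here is the standard Legendre-Gauss-Lobatto (LGL) node set: the endpoints of [-1, 1] plus the roots of the derivative of the degree-N Legendre polynomial. A small sketch of how such nodes can be computed with NumPy's Legendre utilities (illustrative only, not the dissertation's code):

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1] for polynomial order N:
    the endpoints together with the roots of P_N'(x)."""
    coeffs = np.zeros(N + 1)
    coeffs[-1] = 1.0                                   # coefficients of P_N
    interior = np.sort(legendre.legroots(legendre.legder(coeffs)))
    return np.concatenate(([-1.0], interior, [1.0]))
```

For N = 4 this gives the classical five nodes -1, -sqrt(3/7), 0, sqrt(3/7), 1, at which the spectral element solution is represented pointwise.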
 Date Issued
 2008
 Identifier
 FSU_migr_etd0513
 Format
 Thesis
 Title
 Numerical Methods for Portfolio Risk Estimation.
 Creator

Zhang, Jianke, Kercheval, Alec, Huffer, Fred, Gallivan, Kyle, Beaumont, Paul, Nichols, Warren, Department of Mathematics, Florida State University
 Abstract/Description

In portfolio risk management, a global covariance matrix forecast often needs to be adjusted by changing diagonal blocks corresponding to specific sub-markets. Unless certain constraints are obeyed, this can result in the loss of positive definiteness of the global matrix. Imposing the proper constraints while minimizing the disturbance of the off-diagonal blocks leads to a non-convex optimization problem in numerical linear algebra called the Weighted Orthogonal Procrustes Problem. We analyze and compare two local minimizing algorithms and offer an algorithm for global minimization. Our methods are faster and more effective than current numerical methods for covariance matrix revision.
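The failure mode described above is easy to exhibit: overwriting a diagonal block with a new forecast that is itself valid, but inconsistent with the untouched off-diagonal blocks, can destroy positive definiteness of the global matrix. A small NumPy demonstration (the numbers are invented for illustration):

```python
import numpy as np

# A positive definite "global" correlation matrix for four assets.
C = np.array([[1.0, 0.9, 0.9, 0.9],
              [0.9, 1.0, 0.9, 0.9],
              [0.9, 0.9, 1.0, 0.9],
              [0.9, 0.9, 0.9, 1.0]])

def min_eig(A):
    """Smallest eigenvalue of a symmetric matrix (negative => not PSD)."""
    return np.linalg.eigvalsh(A).min()

# Naively overwrite the diagonal block for the first sub-market with a
# new 2x2 forecast that disagrees with the off-diagonal blocks: the
# block is itself a valid correlation matrix, but the global matrix
# loses positive definiteness.
C_new = C.copy()
C_new[:2, :2] = np.array([[1.0, -0.9],
                          [-0.9, 1.0]])
```

Checking `min_eig` before and after shows the sign flip, which is exactly why the revision must be constrained rather than applied block-wise.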
 Date Issued
 2007
 Identifier
 FSU_migr_etd0542
 Format
 Thesis
 Title
 Variance Gamma Pricing of American Futures Options.
 Creator

Yoo, Eunjoo, Nolder, Craig A., Huffer, Fred, Case, Bettye Anne, Kercheval, Alec N., Quine, Jack, Department of Mathematics, Florida State University
 Abstract/Description

In financial markets under uncertainty, the classical Black-Scholes model cannot explain empirical facts such as the fat tails observed in the probability density. To overcome this drawback, during the last decade Lévy processes and stochastic volatility models were introduced into financial modeling. Today, crude oil futures markets are highly volatile. The purpose of this dissertation is to develop a mathematical framework in which American options on crude oil futures contracts are priced more effectively than by current methods. In this work, we use the Variance Gamma process to model the futures price process. To generate the underlying process, we use a random tree method and evaluate the option prices at each tree node. Through fifty replications of a random tree, the averaged value is taken as the true option price. Pricing performance using this method is assessed using American options on crude oil commodity contracts from December 2003 to November 2004. For comparison with the Variance Gamma model, we also price using the Black-Scholes model. Over the entire sample period, positive skewness and high kurtosis, especially in the short-term options, are observed. In terms of pricing errors, the Variance Gamma process performs better than the Black-Scholes model for American options on crude oil commodities.
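The Variance Gamma process can be simulated as Brownian motion with drift evaluated at a gamma-distributed random time change, which is one standard way to generate the underlying paths (a minimal sketch under that construction; parameters and names are illustrative, not the dissertation's):

```python
import numpy as np

def simulate_vg(theta, sigma, nu, T, n_steps, n_paths, seed=0):
    """Simulate Variance Gamma terminal values X_T: Brownian motion with
    drift `theta` and volatility `sigma`, time-changed by a gamma
    subordinator with unit mean rate and variance rate `nu`."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.zeros(n_paths)
    for _ in range(n_steps):
        # gamma time increments: mean dt, variance nu * dt
        G = rng.gamma(shape=dt / nu, scale=nu, size=n_paths)
        Z = rng.standard_normal(n_paths)
        X += theta * G + sigma * np.sqrt(G) * Z
    return X
```

Under this construction E[X_T] = theta*T and Var[X_T] = (sigma^2 + theta^2 * nu) * T, and negative theta produces the negative skew typical of fitted commodity and equity data.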
 Date Issued
 2008
 Identifier
 FSU_migr_etd0691
 Format
 Thesis
 Title
 A Comparison Study of Principal Component Analysis and Nonlinear Principal Component Analysis.
 Creator

Wu, Rui, Magnan, Jerry F., Bellenot, Steven, Sussman, Mark, Department of Mathematics, Florida State University
 Abstract/Description

In the field of data analysis, it is important to reduce the dimensionality of data, because doing so helps one understand the data, extract new knowledge from it, and decrease the computational cost. Principal Component Analysis (PCA) [1, 7, 19] has been applied in various areas as a method of dimensionality reduction. Nonlinear Principal Component Analysis (NLPCA) [1, 7, 19] was originally introduced as a nonlinear generalization of PCA. Both methods were tested on various artificial and natural datasets sampled from "F(x) = sin(x) + x", the Lorenz attractor, and sunspot data, and the results of the experiments were analyzed and compared. Generally speaking, NLPCA can explain more variance than a neural network PCA (NN PCA) in lower dimensions; however, as the dimension increases, the NLPCA approximation eventually loses its advantage. Finally, we introduce a new combination of NN PCA and NLPCA, and analyze and compare its performance.
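The linear baseline being compared here, PCA, amounts to projecting centered data onto the leading singular vectors and reporting the fraction of variance explained. A minimal sketch (illustrative, not the thesis's implementation):

```python
import numpy as np

def pca(X, k):
    """Project data X (n_samples x n_features) onto its first k principal
    components via the SVD of the centered data matrix.  Returns the
    scores, the components, and the fraction of variance explained."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                       # coordinates in PC space
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return scores, Vt[:k], explained
```

On data that lie near a line or low-dimensional plane the explained-variance fraction is close to 1; NLPCA's advantage appears when the data lie near a curved manifold, which no linear projection captures well.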
 Date Issued
 2007
 Identifier
 FSU_migr_etd0704
 Format
 Thesis
 Title
 Combinatorial Type Problems for Triangulation Graphs.
 Creator

Wood, William E., Bowers, Philip, Hawkes, Lois, Bellenot, Steve, Klassen, Eric, Nolder, Craig, Quine, Jack, Department of Mathematics, Florida State University
 Abstract/Description

 The main result in this thesis bounds the combinatorial modulus of a ring in a triangulation graph in terms of the modulus of a related ring. The bounds depend only on how the rings are related and not on the rings themselves. This may be used to solve the combinatorial type problem in a variety of situations, most significantly in graphs with unbounded degree. Other results regarding the type problem are presented, along with several examples illustrating the limits of the results.
 Date Issued
 2006
 Identifier
 FSU_migr_etd0794
 Format
 Thesis
 Title
 Adaptive Spectral Element Methods to Price American Options.
 Creator

Willyard, Matthew, Kopriva, David, Eugenio, Paul, Case, Bettye Anne, Gallivan, Kyle, Nolder, Craig, Okten, Giray, Department of Mathematics, Florida State University
 Abstract/Description

We develop an adaptive spectral element method to price American options, whose solutions contain a moving singularity, automatically and to within prescribed errors. The adaptive algorithm uses an error estimator to determine where refinement or derefinement is needed and a work estimator to decide whether to change the element size or the polynomial order. We derive two local error estimators and a global error estimator. The local error estimators are derived from the Legendre coefficients, and the global error estimator is based on the adjoint problem. One local error estimator uses the rate of decay of the Legendre coefficients to estimate the error; the other compares the solution to an estimated solution using fewer Legendre coefficients, found by the tau method. The global error estimator solves the adjoint problem to weight local error estimates so as to approximate a terminal error functional. Both types of error estimator produce meshes that match expectations, being fine near the early exercise boundary and the strike price and coarse elsewhere. The meshes also adapt as expected, derefining near the strike price as the solution smooths and staying fine near the moving early exercise boundary. Both types of error estimator give solutions whose error is within prescribed tolerances. The adjoint-based error estimator is more flexible, but costs up to three times as much as using the local error estimate alone. The global error estimator has the advantages of tracking the accumulation of error in time and of discounting large local errors that do not affect the chosen terminal error functional. The local error estimator is cheaper to compute, because the global error estimator has the added cost of solving the adjoint problem.
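The first local estimator idea, inferring truncation error from the decay rate of the Legendre coefficients, can be sketched as follows: fit an exponential decay to the tail of the coefficients and sum the extrapolated discarded terms as a geometric series. This is a generic sketch of the technique, with details assumed rather than taken from the dissertation:

```python
import numpy as np

def spectral_error_estimate(a):
    """Estimate the truncation error of a degree-N Legendre expansion from
    the decay of its coefficients: fit log|a_n| ~ log(C) - s*n over the
    tail by least squares, then sum the extrapolated coefficients
    |a_{N+1}| + |a_{N+2}| + ... as a geometric series."""
    a = np.asarray(a, dtype=float)
    n = np.arange(len(a))
    tail = n >= len(a) // 2                    # fit only the decaying tail
    slope, logC = np.polyfit(n[tail], np.log(np.abs(a[tail])), 1)
    s = -slope                                 # decay rate, assumed > 0
    N = len(a) - 1
    return np.exp(logC - s * (N + 1)) / (1.0 - np.exp(-s))
```

Where the solution is smooth the coefficients decay exponentially and the estimate is tiny; near the moving singularity the decay stalls and the estimate flags the element for refinement.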
 Date Issued
 2011
 Identifier
 FSU_migr_etd0892
 Format
 Thesis
 Title
 A Heuristic Method for a Rostering Problem with the Objective of Equal Accumulated Flying Time.
 Creator

Ye, Xugang, Blumsack, Steve, Bellenot, Steve, Braswell, Robert N., Department of Mathematics, Florida State University
 Abstract/Description

Crew costs are the second-largest direct operating cost of airlines, next to fuel costs, so much research has been devoted to the planning and scheduling of crews over the last thirty years. The planning and scheduling of crews is a highly complex combinatorial problem that consists of two independent phases. The first phase is the Crew Pairing Problem (CPP), which concerns finding a set of tasks with minimum cost while satisfying the service requirements. The second phase is the Crew Rostering Problem (CRP), which concerns finding work assignments for crew members over a given period. In this thesis we focus on a Crew Rostering Problem in which a main pilot and a copilot perform each task. The model is a variance minimization problem with 0-1 variables and constraints associated with ensuring collective agreements and rules and guaranteeing the production of flight service. We choose a sequential constructive method (a heuristic) to solve this difficult combinatorial problem because: (1) minimizing a quadratic function of discrete variables makes linear methods difficult to use, and a monthly schedule for one hundred pilots can generate tens of thousands of variables and millions of constraints; and (2) it is an NP-hard problem, which means the CPU time of the solution search grows exponentially as the instance dimension (the number of pilots and the number of tasks) increases. Given the characteristics of the model we propose, we do not seek the global optimal solution; we find a satisfactory (near-optimal) solution. The basic idea of our heuristic method is to decompose the assignment process into subphases, day by day. In minimizing the objective function, two heuristic principles are employed; in coping with the constraints, a weighted matching model and its algorithm are used. In numerical simulation, the overall method is tested for its effectiveness.
We show that our method can produce a solution whose objective value is below a satisfactory bound.
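A toy version of the day-by-day decomposition conveys the flavor of such a sequential constructive heuristic: each day, assign tasks (longest first) to the crew members with the least accumulated flying time. This is a hypothetical illustration only; the thesis's actual method additionally handles the roster constraints through a weighted matching model:

```python
def assign_day(tasks, hours):
    """Greedily assign one day's tasks, each flown by a crew of two, to
    the pilots with the least accumulated flying time -- one day-by-day
    subphase of an equalizing heuristic.  `tasks` is a list of
    (task_id, flight_hours); `hours` maps pilot -> accumulated hours and
    is updated in place.  Returns {task_id: [pilot, pilot]}."""
    roster, busy = {}, set()
    for task_id, task_hours in sorted(tasks, key=lambda t: -t[1]):
        # pick the two least-flown pilots still free today
        crew = sorted((p for p in hours if p not in busy),
                      key=lambda p: hours[p])[:2]
        busy.update(crew)
        for p in crew:
            hours[p] += task_hours
        roster[task_id] = crew
    return roster
```

Repeating this day after day keeps the spread of accumulated hours small, which is exactly the variance-style objective the model minimizes.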
 Date Issued
 2003
 Identifier
 FSU_migr_etd0944
 Format
 Thesis
 Title
 Contour Modeling by Multiple Linear Regression of the Nineteen Piano Sonatas by Mozart.
 Creator

Beard, R. Daniel, Clendinning, Jane Piper, Song, Kai-Sheng, Mathes, James R., Spencer, Peter, College of Music, Florida State University
 Abstract/Description

Theories of musical contour can be described as the study of the change in one musical parameter as a function of another. In my dissertation, contour theories proposed by Robert Morris, Michael Friedmann, Elizabeth Marvin, Paul Laprade, Ian Quinn, Robert John Clifford, Larry Polansky, and Richard Bassein are reviewed. In general, these authors approach changes in pitch as a function of time. A commonality among these theories is the use of a system of pitch-level identification based on the relative highness or lowness of the pitches, not on actual pitch frequencies or pitch intervals in the melody. Additionally, these theories do not account for rhythmic or durational elements of the pitches as they are articulated in time. Music perception studies are cited indicating that contour can play an important role in the recognition and memory of a melody, and that pitch-interval and rhythmic components are vital elements in music understanding. Because these contour theories lack the important musical elements of pitch and rhythm, an analytical method for the study of musical contour that incorporates both in its model of a melody is developed. This method uses the mathematical technique of multiple linear regression to develop a model of the melody that can be graphed as representative of the contour of the actual melody. This method was used to analyze the first themes from the first movements of the nineteen piano sonatas composed by Mozart. Using regression modeling, the sonata melodies were categorized into two melody types: Type MD and Type LB. Analytical methods proposed by other theorists were then used to analyze selected melodies, and a comparison between the multiple linear regression model and these results was made.
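The regression technique named in the abstract can be sketched as follows (a hypothetical illustration; the regressors and the sample melody are invented here, not taken from the dissertation): a melody's pitches are regressed on functions of onset time by solving the ordinary least squares normal equations.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X already includes a leading 1 for the intercept."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yv for r, yv in zip(X, y)) for i in range(n)]
    for c in range(n):                       # forward elimination
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    coef = [0.0] * n                         # back substitution
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][j] * coef[j]
                              for j in range(r + 1, n))) / A[r][r]
    return coef

# MIDI pitches at onset times 0..4, modeled as b0 + b1*t + b2*t^2.
times = [0, 1, 2, 3, 4]
pitches = [60, 64, 67, 64, 60]              # an arch-shaped toy melody
X = [[1.0, t, t * t] for t in times]
coef = fit_linear(X, pitches)
print([round(c, 3) for c in coef])          # → [59.857, 6.286, -1.571]
```

The negative quadratic coefficient captures the arch shape of the toy melody; the fitted curve is the "model of the melody" that would then be graphed against the actual contour.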
 Date Issued
 2003
 Identifier
 FSU_migr_etd1173
 Format
 Thesis
 Title
 Steady Dynamics in Shearing Flows of Nematic Liquid Crystalline Polymers.
 Creator

Liu, Fangyu, Wang, Qi, Sussman, Mark, Song, Kai-Sheng, Department of Mathematics, Florida State University
 Abstract/Description

The biaxiality of the steady-state solutions and their stability to in-plane disturbances in shearing flows of nematic liquid crystalline polymers are studied using the simplified Wang (2002) model. We show that all steady states of the Wang model exhibit biaxial symmetry, in which two directors are confined to the shearing plane, and we analyze their stability with respect to in-plane disturbances at isolated Deborah numbers and polymer concentration values.
 Date Issued
 2004
 Identifier
 FSU_migr_etd1190
 Format
 Thesis
 Title
 Sparse Grid Stochastic Collocation Techniques for the Numerical Solution of Partial Differential Equations with Random Input Data.
 Creator

Webster, Clayton G. (Clayton Garrett), Gunzburger, Max D., Gallivan, Kyle, Peterson, Janet, Tempone, Raul, Department of Mathematics, Florida State University
 Abstract/Description

The objective of this work is the development of novel, efficient, and reliable sparse grid stochastic collocation methods for solving linear and nonlinear partial differential equations (PDEs) with random coefficients and forcing terms (the input data of the model). These techniques consist of a Galerkin approximation in the physical domain and a collocation, in probability space, on sparse tensor-product grids utilizing either Clenshaw-Curtis or Gaussian abscissas. Even in the presence of nonlinearities, the collocation approach leads to the solution of uncoupled deterministic problems, just as in the Monte Carlo method. Full tensor-product spaces suffer from the curse of dimensionality, since the dimension of the approximating space grows exponentially in the number of random variables. When this number is moderately large, we combine the advantages of isotropic sparse collocation with those of anisotropic full tensor-product collocation: the first approach is effective for problems depending on random variables that weigh equally in the solution; the latter is ideal for highly anisotropic problems depending on a relatively small number of random variables. We also include a priori and a posteriori procedures that adapt the anisotropy of the sparse grids to each problem; these procedures are very effective for the problems under study. This work also provides a rigorous convergence analysis of the fully discrete problem, demonstrating (sub)exponential convergence in the asymptotic regime and algebraic convergence in the pre-asymptotic regime with respect to the total number of collocation points. Numerical examples illustrate the theoretical results and compare this approach with several others, including standard Monte Carlo. For moderately large-dimensional problems, the sparse grid approach with a properly chosen anisotropy is very efficient and superior to all examined methods.
Due to the high cost of effecting each realization of the PDE, this work also proposes the use of reduced-order models (ROMs) to minimize the cost of determining accurate statistical information about outputs from ensembles of realizations. We explore the use of ROMs, which greatly reduce the cost of determining approximate solutions, for computing outputs that depend on solutions of stochastic PDEs. One can then cheaply determine much larger ensembles, but this increase in sample size is countered by the lower fidelity of the ROM used to approximate the state. In the context of proper orthogonal decomposition-based ROMs, we explore these counteracting effects on the accuracy of statistical information about outputs determined from ensembles of solutions.
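For orientation, the Clenshaw-Curtis abscissas the collocation method uses are simply Chebyshev extrema, and the exponential growth that motivates sparse grids is easy to see (a minimal sketch assuming the standard definition x_j = cos(pi*j/(m-1)); not code from the dissertation):

```python
import math

def clenshaw_curtis(m):
    """Clenshaw-Curtis abscissas on [-1, 1]: the m extrema of a
    Chebyshev polynomial, x_j = cos(pi * j / (m - 1))."""
    if m == 1:
        return [0.0]
    return [math.cos(math.pi * j / (m - 1)) for j in range(m)]

nodes = clenshaw_curtis(5)
print([round(x, 4) for x in nodes])   # 1, sqrt(2)/2, 0, -sqrt(2)/2, -1

# Curse of dimensionality: a full tensor-product grid with m = 5
# points per direction needs m^d points in d random variables.
for d in (2, 5, 10):
    print(d, 5 ** d)                  # 25, 3125, 9765625
```

Sparse (Smolyak-type) grids keep only selected combinations of these one-dimensional rules, which is what breaks the m^d growth for moderately large d.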
 Date Issued
 2007
 Identifier
 FSU_migr_etd1223
 Format
 Thesis
 Title
 Numerical Methods for Two-Phase Jet Flow.
 Creator

Wang, Yaohong, Sussman, Mark, Alvi, Farrukh S., Ewald, Brian, Quine, Jack, Wang, Xiaoming, Department of Mathematics, Florida State University
 Abstract/Description

Two numerical methods are developed and analyzed for studying two-phase jet flows. The first solves the eigenvalue problem for the matrix system constructed from the pseudospectral discretization of the 3D linearized, incompressible, perturbed Navier-Stokes (NS) equations for two-phase flows; it is denoted LSA, for "linear stability analysis." The second solves the 3D (nonlinear) NS equations for incompressible two-phase flows; it is denoted DNS, for "direct numerical simulation." In this thesis, predictions of jet stability using the LSA method are compared with predictions using DNS. Researchers have not previously compared LSA with DNS for the co-flowing two-phase jet problem, and have only recently validated LSA against DNS for the simpler Rayleigh capillary stability problem [77] [20] [103] [26]. In this thesis, a DNS method has been developed for cylindrical coordinate systems; researchers have not previously simulated 3D two-phase jet flow in cylindrical coordinates. The numerical predictions for jet flow are compared: (1) LSA with DNS, (2) DNS-CLSVOF with DNS-LS, and (3) 3D rectangular with 3D cylindrical. "DNS-CLSVOF" denotes the coupled level set and volume-of-fluid method for computing solutions to incompressible two-phase flows [99]; "DNS-LS" denotes a novel hybrid level set and volume-constraint method for simulating incompressible two-phase flows [89].
The following discoveries have been made in this thesis: (1) the DNS-CLSVOF and DNS-LS methods both converge under grid refinement to the same results for predicting the breakup of a liquid jet, both before and after breakup; (2) computing jet breakup in 3D cylindrical coordinates is more efficient than in 3D rectangular coordinates; and (3) the LSA method agrees with the DNS method for the initial growth of instabilities (comparisons were made for the classical Rayleigh capillary problem and the co-flowing jet problem). For the classical Rayleigh capillary stability problem, the LSA prediction differs from the DNS prediction at later times.
 Date Issued
 2010
 Identifier
 FSU_migr_etd1246
 Format
 Thesis
 Title
 Regulation of Rhythmic Prolactin Secretion: Combined Mathematical and Experimental Study.
 Creator

Toporikova, Natalia, Bertram, Richard, Freeman, Marc E., Tabak-Sznajder, Joel, Quine, John, Sumners, De Witt, Department of Mathematics, Florida State University
 Abstract/Description

The focus of this work is pulsatile prolactin (PRL) secretion, studied through in vivo experiments on PRL release in female rats and mathematical modeling at the system and single-cell levels. First we investigate the generation of the semicircadian rhythm of PRL that occurs during the first half of pregnancy in female rats. Using an experimental approach, we show that this rhythm can be induced by the injection of oxytocin, suggesting that this hormone is responsible for triggering the rhythm. Using mathematical modeling, we propose a likely mechanism for this effect: according to this model, the PRL rhythm is generated by the interaction of hypothalamic neurons and pituitary lactotrophs. In the second part of this work we study PRL release at the single-cell level. We first develop a mathematical model of the pituitary lactotroph and use it to identify the mechanism for the stimulatory effects of dopamine (DA) on lactotrophs; these effects are paradoxical, since DA activates only inhibitory ionic currents. We also show cases of bursting in the absence of a slow variable and analyze the dynamic mechanism for this novel form of bursting. Finally, we develop a mathematical model of the effect of endothelin (ET) on PRL secretion from pituitary lactotrophs. This model combines four different biochemical signaling pathways, each of which is activated by ET and mediated by G-proteins.
 Date Issued
 2007
 Identifier
 FSU_migr_etd1277
 Format
 Thesis
 Title
 An Optimal Control Problem for a Time-Dependent Ginzburg-Landau Model of Superconductivity.
 Creator

Lin, Haomin, Peterson, Janet, Gunzburger, Max, Schwartz, Justin, Wang, Xiaoming, Horne, Rudy, Trenchea, Catalin, Department of Mathematics, Florida State University
 Abstract/Description

The motion of vortices in a Type-II superconductor destroys the material's superconductivity because it dissipates energy and causes resistance. When a transport current is applied to a clean Type-II superconductor in the mixed state, the vortices are set in motion by the induced Lorentz force, and the superconductivity of the material is lost. However, various pinning mechanisms, such as normal inclusions, can inhibit vortex motion and pin the vortices to specific sites. We demonstrate that the placement of the normal inclusion sites has an important effect on the largest electrical current that can be applied to the superconducting material while all vortices remain stationary. Here, an optimal control problem using a time-dependent Ginzburg-Landau model is proposed to seek numerically the optimal locations of the normal inclusion sites. An analysis of this optimal control problem is performed, the existence of an optimal control solution is proved, and a sensitivity system is given. We then derive a gradient method to solve this optimal control problem. Numerical simulations are performed and the results are presented and discussed.
 Date Issued
 2008
 Identifier
 FSU_migr_etd1334
 Format
 Thesis
 Title
 Chern-Schwartz-MacPherson Classes of Graph Hypersurfaces and Schubert Varieties.
 Creator

Stryker, Judson P., Aluffi, Paolo, Van Engelen, Robert, Aldrovandi, Ettore, Hironaka, Eriko, Van Hoeij, Mark, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation establishes partial results in support of two positivity conjectures regarding the Chern-Schwartz-MacPherson (CSM) classes of graph hypersurfaces (conjectured by Aluffi and Marcolli) and Schubert varieties (conjectured by Aluffi and Mihalcea). Direct calculations of some of these CSM classes are performed; formulas for CSM classes of families of graph hypersurfaces and for coefficients of CSM classes of Schubert varieties are developed; and the positivity of the CSM class for certain families of these varieties is proven. The first chapter gives an overview and introduction to the material, along with the background needed to understand this dissertation. In the second chapter, a series of equivalences of graph hypersurfaces, useful for reducing the number of cases that must be calculated, is developed. A table of CSM classes of all but one graph with 6 or fewer edges is explicitly computed; this table also contains Fulton Chern classes and Milnor classes of the graph hypersurfaces. Using the equivalences and a series of formulas from a paper by Aluffi and Mihalcea, new formulas for the CSM classes of certain families of graph hypersurfaces are deduced. I prove positivity for all graph hypersurfaces corresponding to graphs with first Betti number 3 or less. Formulas for graphs equivalent to graphs with 6 or fewer edges are developed, as well as for cones over graphs with 6 or fewer edges. In the third chapter, CSM classes of Schubert varieties are discussed. Aluffi and Mihalcea conjectured that all Chern classes of Schubert varieties are represented by effective cycles; this was proven in special cases by B. Jones. I examine positivity results by analyzing and applying combinatorial methods to a formula of Aluffi and Mihalcea. Positivity of what could be considered the "typical" case for low-codimension coefficients is established.
Other general results for the positivity of certain coefficients of Schubert varieties are also found. This technique establishes positivity very quickly for some known cases, such as the codimension-1 case described by Jones, and also establishes positivity for codimension 2 and for families of cases that were previously unknown. An unexpected connection between one family of cases and a second-order PDE is also found. Positivity is shown for all cases of codimensions 1-4, and some higher codimensions are discussed. For both the graph hypersurfaces and the Schubert varieties, all calculated Chern-Schwartz-MacPherson classes were found to be positive.
 Date Issued
 2011
 Identifier
 FSU_migr_etd1531
 Format
 Thesis
 Title
 Applications of Representation Theory and Higher-Order Perturbation Theory in NMR.
 Creator

Srinivasan, Parthasarathy, Quine, John R., Gan, Zhehong, Chapman, Michael S., Bowers, Philip, Sumners, DeWitt, Department of Mathematics, Florida State University
 Abstract/Description

Solid-state Nuclear Magnetic Resonance (NMR) is perhaps the only spectroscopic technique that allows experimentalists to manipulate the spin systems they are interested in. Of particular interest are nuclei with spins greater than 1/2, or quadrupolar nuclei, as they constitute over 70% of the magnetically active spins. Two of the important mathematical tools used in the theory of NMR are representation theory and perturbation theory. We use both tools to describe the underlying mathematical theory for quadrupolar nuclei. The theory shows that for non-symmetric satellite transitions in half-integer quadrupolar nuclei, perturbation effects up to third order feature in the NMR spectra. We also use irreducible representations to analyze experiments conducted on various spin systems and discuss ways to design new ones. Another topic explored is the theory of rotary resonance in half-integer quadrupolar nuclei; this theory explains why techniques like FASTER (FAster Spinning gives Transfer Enhancement at Rotary resonance) improve the efficiency of symmetric multiple-quantum experiments.
 Date Issued
 2005
 Identifier
 FSU_migr_etd1600
 Format
 Thesis
 Title
 Geometric and Computational Generation, Correction, and Simplification of Cortical Surfaces of the Human Brain.
 Creator

Singleton, Lee William, Hurdal, Monica K., Kumar, Piyush, Mio, Washington, Quine, Jack, Department of Mathematics, Florida State University
 Abstract/Description

The generation, correction, and simplification of brain surfaces from magnetic resonance imaging (MRI) data are important for studying brain characteristics, diseases, and functionality. Changes in cortical surfaces are used to compare healthy and diseased populations and to understand how the brain changes as we age. We present several algorithms that use corrected MRI data to create a manifold surface, correct its topology, and simplify the resulting surface. We compare several algorithmic choices and highlight the options that result in surfaces with the most desirable properties. In our discussion of surface generation, we present new approaches, analyze their features, and provide a simple way to ensure that the created surface is a manifold. We compare our approaches to an existing method by examining the geometric and topological properties of the generated surfaces, including triangle count, surface area, Euler characteristic, and vertex degree. Our chapter on topology correction describes an algorithm that corrects the topology of a surface from the underlying volume data under a specific digital connectivity. We also present notation for new types of digital connectivities and show how our algorithm can be generalized to correct surfaces using these new connectivity schemes on the underlying volume. Our surface simplification algorithm is able to replace surface edges with new points in space rather than being restricted to the surface. We present new formulas for the fast and efficient computation of points for interior as well as boundary edges. We also report the performance of several cost functions in surface simplification, and other algorithmic choices are discussed and evaluated for effectiveness.
We are able to produce high-quality surfaces that reduce the number of surface triangles by 85-86% on average while preserving surface topology, geometry, and anatomical features. On closed surfaces, our algorithm also preserves the volume inside the surface. This work improves the general framework of surface processing: we produce high-quality surfaces with very few triangles while maintaining the general properties of the surface. These results benefit downstream processes by reducing the processing time of applications such as flattening, inflation, and registration, and our surfaces yield much smaller files for use in future database systems. Furthermore, these algorithms can be applied to other areas of computational anatomy and scientific visualization, with applicability to medicine, computer graphics, and computational geometry.
 Date Issued
 2007
 Identifier
 FSU_migr_etd1702
 Format
 Thesis
 Title
 Predegree Polynomials of Plane Configurations in Projective Space.
 Creator

Tzigantchev, Dimitre G. (Dimitre Gueorguiev), Aluffi, Paolo, Reina, Laura, Aldrovandi, Ettore, Klassen, Eric, Seppälä, Mika, Department of Mathematics, Florida State University
 Abstract/Description

We work over an algebraically closed ground field of characteristic zero. The group PGL(4) acts naturally on the projective space P^N parameterizing surfaces of a given degree d in P^3. The orbit of a surface under this action is the image of a rational map from P^15 to P^N. The closure of the orbit is a natural and interesting object to study. Its predegree is defined as the degree of the orbit closure multiplied by the degree of the above map restricted to a general P^j, j being the dimension of the orbit. We find the predegrees and other invariants for all surfaces supported on unions of planes. The information is encoded in so-called adjusted predegree polynomials, which possess nice multiplicative properties allowing us to easily compute the predegree (polynomials) of various special plane configurations. The predegree has both combinatorial and geometric significance. The results obtained in this thesis are a necessary step toward solving the problem of computing predegrees for all surfaces.
 Date Issued
 2006
 Identifier
 FSU_migr_etd1747
 Format
 Thesis
 Title
 Stochastic Volatility Extensions of the Swap Market Model.
 Creator

Tzigantcheva, Milena G. (Milena Gueorguieva), Nolder, Craig, Huffer, Fred, Case, Bettye Anne, Kercheval, Alec, Quine, Jack, Sumners, De Witt, Department of Mathematics, Florida State University
 Abstract/Description

Two stochastic volatility extensions of the Swap Market Model, one with jumps and one without, are derived. In both extensions, the instantaneous volatility of the forward swap rates evolves according to a square-root diffusion process. In the jump-diffusion stochastic volatility extension, proportional lognormal jumps are applied to the swap rate dynamics. The speed, flexibility, and accuracy of the fast fractional Fourier transform make possible a fast calibration to European swaption market prices. A specific functional form of the instantaneous swap rate volatility structure is used to match the observed evidence that the volatility of the instantaneous swap rate decreases with longer swaption maturity and with larger swaption tenor.
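The square-root diffusion driving the instantaneous volatility can be sketched with a standard Euler scheme (a hypothetical illustration with invented parameter values, not the calibrated model from the dissertation; "full truncation" is one common way to keep the discretized variance usable when it dips negative):

```python
import random

def simulate_sqrt_diffusion(v0, kappa, theta, sigma, T, n, rng):
    """Euler scheme with full truncation for the square-root (CIR-type)
    variance process dv = kappa*(theta - v) dt + sigma*sqrt(v) dW."""
    dt = T / n
    v, path = v0, [v0]
    for _ in range(n):
        v_pos = max(v, 0.0)          # full truncation: floor v at 0
        v += kappa * (theta - v_pos) * dt \
             + sigma * (v_pos ** 0.5) * rng.gauss(0.0, dt ** 0.5)
        path.append(v)
    return path

rng = random.Random(42)
path = simulate_sqrt_diffusion(v0=0.04, kappa=1.5, theta=0.04,
                               sigma=0.3, T=1.0, n=252, rng=rng)
print(round(path[-1], 4))
```

Mean reversion (kappa) pulls the variance back toward its long-run level theta, which is what produces the term-structure decay of volatility described in the abstract.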
 Date Issued
 2008
 Identifier
 FSU_migr_etd1762
 Format
 Thesis
 Title
 No-Reference Natural Image/Video Quality Assessment of Noisy, Blurry, or Compressed Images/Videos Based on Hybrid Curvelet, Wavelet and Cosine Transforms.
 Creator

Shen, Ji, Erlebacher, Gordon, Bellenot, Steve, Bertram, Richard, Sussman, Mark, Wang, Xiaoming, Liu, Xiuwen, Department of Mathematics, Florida State University
 Abstract/Description

In this thesis, we first propose a new Image Quality Assessment (IQA) method based on a hybrid of curvelet, wavelet, and cosine transforms, called the Hybrid No-Reference (HNR) model. From the properties of natural scene statistics, the peak coordinates of the transformed coefficient histograms of filtered natural images occupy well-defined clusters in peak coordinate space, which makes a no-reference approach possible. Compared to other methods, HNR has three benefits: (1) it is a no-reference method applicable to arbitrary images without compromising the prediction accuracy of full-reference methods; (2) to the best of our knowledge, it is the only general no-reference method well suited to four types of image degradation: noise, blur, JPEG2000, and JPEG compression; (3) it performs excellently in additional applications such as the classification of images with subtle differences that are hard for the human visual system to detect, the classification of image filter types, and the prediction of the noise or blur level of a compressed image. HNR was tested on VIVID (our image library) and LIVE (a public library). When tested against VIVID, HNR has an image quality prediction accuracy above 0.97, measured using correlation coefficients, with an average RMS error below 7%. Although HNR does not use reference images, it compares favorably (except on JPEG) to state-of-the-art full-reference methods such as PSNR, SSIM, and VIF when tested on the LIVE image database. HNR also predicts the quality of noisy or blurry compressed images with a correlation above 0.98. In addition, we extend our image quality assessment methodology to three video quality assessment models. VideoHNR (VHNR) uses 3D curvelet and cosine transforms to study the relation between the extracted features and video quality; VelocityVideoHNR (VVHNR) incorporates video motion speed to further improve the accuracy of the metric; and FrameHNR defines the video quality as the average of the image quality of each video frame.
These metrics perform much better than PSNR, the most widely used algorithm.
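For reference, the PSNR baseline against which these metrics are compared is the generic textbook formula (not the HNR model itself; the pixel values below are invented):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(peak * peak / mse)

ref = [100, 120, 130, 140]
noisy = [101, 119, 132, 137]
print(round(psnr(ref, noisy), 2))    # → 42.39
```

PSNR is a full-reference metric: it needs the original image, which is exactly the requirement the no-reference HNR model removes.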
 Date Issued
 2010
 Identifier
 FSU_migr_etd1777
 Format
 Thesis
 Title
 Variance Reduction Techniques in Pricing Financial Derivatives.
 Creator

Salta, Emmanuel R., Okten, Giray, Srinivasan, Ashok, Case, Bettye Anne, Ewald, Brian, Nolder, Craig, Quine, John R., Department of Mathematics, Florida State University
 Abstract/Description

In this dissertation, we evaluate existing Monte Carlo estimators and develop new Monte Carlo estimators for pricing financial options, with the goal of improving precision. In Chapter 2, we discuss the conditional expectation Monte Carlo estimator for pricing barrier options and show that the formulas for this estimator used in the literature are incorrect; we provide a corrected version of the formula. In Chapter 3, we focus on importance sampling methods for estimating the price of barrier options. We show how a simulated annealing procedure can be used to estimate the parameters required by the importance sampling method, and we end the chapter by evaluating the performance of the combined importance sampling and conditional expectation method. In Chapter 4, we analyze the estimators introduced by Ross and Shanthikumar for pricing barrier options and present a numerical example to test their performance.
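A crude discretely monitored barrier-option estimator gives a baseline for the estimators studied in the dissertation (a hypothetical sketch with invented parameters; the conditional expectation and importance sampling estimators discussed above are refinements that reduce the variance of exactly this kind of plain Monte Carlo):

```python
import math
import random

def down_and_out_call(S0, K, B, r, sigma, T, n_steps, n_paths, rng):
    """Plain Monte Carlo price of a discretely monitored down-and-out
    call under geometric Brownian motion."""
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, alive = S0, True
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= B:               # barrier breached: option knocked out
                alive = False
                break
        if alive:
            payoff_sum += max(s - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

rng = random.Random(0)
price = down_and_out_call(S0=100, K=100, B=90, r=0.05, sigma=0.2,
                          T=1.0, n_steps=50, n_paths=20000, rng=rng)
print(round(price, 3))
```

The variance of this estimator is driven by the all-or-nothing knock-out indicator, which is why conditioning on survival probabilities or tilting the sampling measure pays off.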
 Date Issued
 2008
 Identifier
 FSU_migr_etd2102
 Format
 Thesis
 Title
 Analysis of Two Partial Differential Equation Models in Fluid Mechanics: Nonlinear Spectral Eddy-Viscosity Model of Turbulence and Infinite-Prandtl-Number Model of Mantle Convection.
 Creator

Saka, Yuki, Gunzburger, Max D., Wang, Xiaoming, El-Azab, Anter, Peterson, Janet, Wang, Xiaoqiang, Department of Mathematics, Florida State University
 Abstract/Description

This thesis presents two problems in the mathematical and numerical analysis of partial differential equations modeling fluids. The first is related to modeling of turbulence phenomena. One of the objectives in simulating turbulence is to capture the large-scale structures in the flow without explicitly resolving the small scales numerically. This is generally accomplished by adding regularization terms to the Navier-Stokes equations. In this thesis, we examine the spectral viscosity models in which only the high-frequency spectral modes are regularized. The objective is to retain the large-scale dynamics while modeling the turbulent fluctuations accurately. The spectral regularization introduces a host of parameters to the model. In this thesis, we rigorously justify effective choices of parameters. The other problem is related to modeling of the mantle flow in the Earth's interior. We study a model equation derived from the Boussinesq equation where the Prandtl number is taken to infinity. This essentially models the flow under the assumption of a large viscosity limit. The novelty in our problem formulation is that the viscosity depends on the temperature field, which makes the mathematical analysis nontrivial. Compared to the constant viscosity case, variable viscosity introduces a second-order nonlinearity which makes the mathematical question of well-posedness more challenging. Here, we prove this using tools from the regularity theory of parabolic partial differential equations.
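The idea of regularizing only the high-frequency part of the spectrum can be illustrated with a minimal sketch (an assumed toy setup, not the thesis's model or parameter analysis): one explicit step that applies viscous damping to Fourier modes above a cutoff m while leaving the low, large-scale modes untouched.

```python
import numpy as np

def spectral_viscosity_step(u, nu=0.05, m=8, dt=0.01, L=2*np.pi):
    """One explicit time step of u_t = -nu * Q(-u_xx), where the spectral
    viscosity operator Q acts only on Fourier modes with |k| > m.
    Low modes (|k| <= m) are left untouched, mimicking the idea of
    regularizing only the high-frequency part of the spectrum."""
    n = len(u)
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wavenumbers when L = 2*pi
    u_hat = np.fft.fft(u)
    mask = np.abs(k) > m                          # high-frequency modes only
    u_hat[mask] *= np.exp(-nu * k[mask]**2 * dt)  # exact viscous decay factor
    return np.fft.ifft(u_hat).real

x = np.linspace(0, 2*np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5*np.sin(20*x)                  # low mode + high mode
v = spectral_viscosity_step(u)
```

After the step, mode 1 is unchanged while mode 20 is damped by the factor exp(-nu * 20**2 * dt).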
 Date Issued
 2007
 Identifier
 FSU_migr_etd2108
 Format
 Thesis
 Title
 Numerical Optimization Methods on Riemannian Manifolds.
 Creator

Qi, Chunhong, Gallivan, Kyle A., Absil, Pierre-Antoine, Duke, Dennis, Erlebacher, Gordon, Hussaini, M. Yousuff, Okten, Giray, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation considers the generalization of two well-known unconstrained optimization algorithms for R^n to solve optimization problems whose constraints can be characterized as a Riemannian manifold. Efficiency and effectiveness are obtained compared to more traditional approaches to Riemannian optimization by applying the concepts of retraction and vector transport. We present a theory of building vector transports on submanifolds of R^n and use the theory to assess convergence conditions and computational efficiency of the Riemannian optimization algorithms. We generalize the BFGS method, a highly effective quasi-Newton method for unconstrained optimization on R^n. The Riemannian version, RBFGS, is developed and its convergence and efficiency analyzed. Conditions that ensure superlinear convergence are given. We also consider the Euclidean Adaptive Regularization using Cubics method (ARC) for unconstrained optimization on R^n. ARC is similar to trust region methods in that it uses a local model to determine the modification to the current estimate of the optimal solution. Rather than a quadratic local model and constraints as in a trust region method, ARC uses a parameterized local cubic model. We present a generalization, the Riemannian Adaptive Regularization using Cubics method (RARC), along with global and local convergence theory. The efficiency and effectiveness of the RARC and RBFGS methods are investigated and their performance compared to the predictions made by the convergence theory via a series of optimization problems on various manifolds.
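The retraction concept can be illustrated with a minimal sketch (an assumed example, not the RBFGS or RARC algorithms of the dissertation): Riemannian steepest descent for the Rayleigh quotient on the unit sphere, using normalization as the retraction.

```python
import numpy as np

def sphere_gradient_descent(a, x0, step=0.1, iters=500):
    """Minimize f(x) = x^T A x on the unit sphere S^{n-1} by Riemannian
    gradient descent.  The Riemannian gradient is the Euclidean gradient
    projected onto the tangent space at x, and the retraction simply
    renormalizes x + step*direction back onto the sphere -- the cheapest
    retraction for this manifold."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * a @ x                      # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x        # project onto tangent space at x
        x = x - step * rgrad                   # step in the tangent direction
        x = x / np.linalg.norm(x)              # retraction: renormalize
    return x

a = np.diag([3.0, 2.0, 1.0])
x = sphere_gradient_descent(a, np.array([1.0, 1.0, 1.0]))
```

The minimizer is an eigenvector for the smallest eigenvalue of A, so the final objective value x^T A x approaches 1 for this diagonal example.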
 Date Issued
 2011
 Identifier
 FSU_migr_etd2263
 Format
 Thesis
 Title
 Impulse Control Problems under Non-Constant Volatility.
 Creator

Moreno, Juan F. (Juan Felipe), Kercheval, Alec, Huffer, Fred, Beaumont, Paul, Nichols, Warren, Nolder, Craig, Wang, Xiaoming, Department of Mathematics, Florida State University
 Abstract/Description

The objective of this dissertation is to study impulse control problems in situations where the volatility of the underlying process is not constant. First, we explore the case where the dynamics of the underlying process are modified for a fixed (or random with known probability distribution) period of time after each intervention of the impulse control. We propose a modified intervention operator to be used in the Quasi-Variational Inequalities approach for solving impulse control problems, and we formulate and prove a verification theorem for finding the Value Function of the problem and the optimal control. Secondly, we use a perturbation approach to tackle impulse control problems when the volatility of the underlying process is stochastic but mean-reverting. The perturbation method permits us to approximate the Value Function and the parameters of the optimal control. Finally, we present a numerical scheme to obtain solutions to impulse control problems with constant and stochastic volatility. Throughout the thesis we find explicit solutions to practical applications in financial mathematics; specifically, in optimal central bank intervention of the exchange rate and in optimal dividend payment policies.
 Date Issued
 2007
 Identifier
 FSU_migr_etd2271
 Format
 Thesis
 Title
 A Stock Market Agent-Based Model Using Evolutionary Game Theory and Quantum Mechanical Formalism.
 Creator

Montin, Benoit S., Nolder, Craig A., Huffer, Fred W., Case, Bettye Anne, Beaumont, Paul M., Kercheval, Alec N., Sumners, DeWitt L., Department of Mathematics, Florida State University
 Abstract/Description

The financial market is modelled as a complex self-organizing system. Three economic agents interact in a simplified economy and seek the maximization of their wealth. Replicator dynamics are used as a myopic behavioral rule to describe how agents learn and benefit from their experiences. Stock price fluctuations result from interactions between economic agents, budget constraints and conservation laws. Time is discrete. Invariant distributions over the state space, that is to say probability measures that remain unchanged by the one-period transition rule, form stochastic equilibria for our composite system. When agents make mistakes, there is a unique stochastic steady state which reflects the average and limit behavior. Convergence of the iterates occurs at a geometric rate in the total variation norm. Interestingly, when the probability of making a mistake tends to zero, the invariant distribution converges weakly to a stochastic equilibrium for the model without mistakes. Most agent-based computational economies heavily rely on simulations. Having adopted a simple representation of financial markets, we have been able to prove the above theoretical results and gain intuition on complexity economics. The impact of simple monetary policies on the limit stock price distribution, such as a decrease of the risk-free rate of interest, has been analyzed. Of interest as well, the limit stock log return distribution presents real-world features (skewed and leptokurtic) that more traditional models usually fail to explain or consider. Our artificial market is incomplete. The bid and ask prices of a vanilla Call option have been computed to illustrate option pricing in our setting.
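The replicator update used as the agents' myopic learning rule has a standard discrete-time form; here is a minimal sketch with a hypothetical 2x2 coordination payoff matrix (illustrative only, not the dissertation's three-agent economy):

```python
import numpy as np

def replicator_step(x, payoff):
    """One step of the discrete-time replicator dynamic: the share of
    strategy i grows in proportion to its fitness relative to the
    population-average fitness."""
    fitness = payoff @ x          # fitness of each pure strategy
    avg = x @ fitness             # population-average fitness
    return x * fitness / avg

# Hypothetical 2-strategy coordination payoffs (illustrative only).
payoff = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
x = np.array([0.6, 0.4])          # initial population shares
for _ in range(100):
    x = replicator_step(x, payoff)
```

Starting with a majority on the higher-payoff strategy, the shares converge to the pure state (1, 0); the update preserves the property that the shares sum to one.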
 Date Issued
 2004
 Identifier
 FSU_migr_etd2331
 Format
 Thesis
 Title
 Factoring Univariate Polynomials over the Rationals.
 Creator

Novocin, Andrew, Van Hoeij, Mark, Van Engelen, Robert, Agashe, Amod, Aldrovandi, Ettore, Aluffi, Paolo, Department of Mathematics, Florida State University
 Abstract/Description

This thesis presents an algorithm for factoring polynomials over the rationals which follows the approach of the van Hoeij algorithm. The key theoretical novelty in our approach is that it is set up in a way that makes it possible to prove a new complexity result for this algorithm, a result that was previously only observed empirically for prior algorithms. One difference of this algorithm from prior algorithms is the practical improvement which we call early termination. Our algorithm should outperform prior algorithms on many common classes of polynomials (including irreducibles).
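The van Hoeij approach itself is lattice-based and well beyond a short sketch, but the flavor of factoring over the rationals can be conveyed with a far simpler baseline, the rational root theorem, which only extracts linear factors:

```python
from fractions import Fraction

def rational_roots(coeffs):
    """Find all rational roots of a polynomial with integer coefficients
    (coeffs[i] is the coefficient of x**i) via the rational root theorem:
    any root p/q in lowest terms has p dividing coeffs[0] and q dividing
    coeffs[-1].  This only finds linear factors -- a much easier task than
    the lattice-based van Hoeij factoring algorithm described above."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    a0 = coeffs[0]
    if a0 == 0:                               # x divides the polynomial
        return [Fraction(0)] + rational_roots(coeffs[1:])
    roots = []
    for p in divisors(a0):
        for q in divisors(coeffs[-1]):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                value = sum(c * cand**i for i, c in enumerate(coeffs))
                if value == 0 and cand not in roots:
                    roots.append(cand)
    return roots
```

For example, 2x^2 - 3x + 1 = (2x - 1)(x - 1) has rational roots 1 and 1/2.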
 Date Issued
 2008
 Identifier
 FSU_migr_etd2515
 Format
 Thesis
 Title
 Centroidal Voronoi Tessellations for Mesh Generation: from Uniform to Anisotropic Adaptive Triangulations.
 Creator

Nguyen, Hoa V., Gunzburger, Max D., El-Azab, Anter, Peterson, Janet, Wang, Xiaoming, Wang, Xiaoqiang, Department of Mathematics, Florida State University
 Abstract/Description

Mesh generation in regions in Euclidean space is a central task in computational science, especially for commonly used numerical methods for the solution of partial differential equations (PDEs), e.g., finite element and finite volume methods. Mesh generation can be classified into several categories depending on the element sizes (uniform or non-uniform) and shapes (isotropic or anisotropic). Uniform meshes have been well studied and still find application in a wide variety of problems. However, when solving certain types of partial differential equations for which the solution variations are large in some regions of the domain, non-uniform meshes result in more efficient calculations. If the solution changes more rapidly in one direction than in others, non-uniform anisotropic meshes are preferred. In this work, first we present an algorithm to construct uniform isotropic meshes and discuss several mesh quality measures. Secondly, we construct an adaptive method which produces non-uniform anisotropic meshes that are well suited for numerically solving PDEs such as the convection-diffusion equation. For the uniform Delaunay triangulation of planar regions, we focus on how one selects the positions of the vertices of the triangulation. We discuss a recently developed method, based on the centroidal Voronoi tessellation (CVT) concept, for effecting such triangulations and present two algorithms, including one new one, for CVT-based grid generation. We also compare several methods, including CVT-based methods, for triangulating planar domains. Furthermore, we define several quantitative measures of the quality of uniform grids. We then generate triangulations of several planar regions, including some having complexities that are representative of what one may encounter in practice.
We subject the resulting grids to visual and quantitative comparisons and conclude that all the methods considered produce high-quality uniform isotropic grids and that the CVT-based grids are at least as good as any of the others. For more general grid generation settings, e.g., non-uniform and/or anisotropic grids, such quantitative comparisons are much more difficult, if not impossible, to either make or interpret. This motivates us to develop CVT-based adaptive non-uniform anisotropic mesh refinement in the context of solving the convection-diffusion equation with emphasis on convection-dominated problems. The challenge in the numerical approximation of this equation is due to large variations in the solution over small regions of the physical domain. Our method not only refines the underlying grid at these regions but also stretches the elements according to the solution variation. Three main ingredients are incorporated to improve the accuracy of numerical solutions and increase the algorithm's robustness and efficiency. First, a streamline upwind Petrov-Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor. Our algorithm has been tested on a variety of two-dimensional examples. It is robust in detecting layers and efficient in resolving non-physical oscillations in the numerical approximation.
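The CVT concept behind the grid generation can be illustrated in one dimension with Lloyd's algorithm, the standard CVT iteration (a sketch for a constant density on [0, 1], not the anisotropic adaptive method of the thesis):

```python
def lloyd_1d(gens, iters=200):
    """Lloyd's algorithm for a centroidal Voronoi tessellation of [0, 1]
    with constant density.  Each generator is moved to the centroid
    (midpoint) of its Voronoi cell; the fixed point is a CVT, which for a
    uniform density means evenly spaced generators."""
    z = sorted(gens)
    n = len(z)
    for _ in range(iters):
        # Voronoi cell boundaries: midpoints between neighbouring generators.
        bounds = [0.0] + [(z[i] + z[i + 1]) / 2 for i in range(n - 1)] + [1.0]
        # Move each generator to the centroid of its cell.
        z = [(bounds[i] + bounds[i + 1]) / 2 for i in range(n)]
    return z

z = lloyd_1d([0.05, 0.1, 0.2, 0.8])
```

For n generators under constant density, the iteration drives the generators toward the evenly spaced points (2i + 1)/(2n), even from a badly clustered start.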
 Date Issued
 2008
 Identifier
 FSU_migr_etd2616
 Format
 Thesis
 Title
 DNA Knotting: Occurrences, Consequences & Resolution.
 Creator

Mann, Jennifer Katherine, Sumners, De Witt L., Zechiedrich, E. Lynn, Greenbaum, Nancy L., Heil, Wolfgang, Quine, Jack, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation applies knot theory, DNA topology, linear algebra, statistics, probability theory and statistical mechanics to address questions about knotted, double-stranded DNA. The three main investigations are the cellular effects of knotting, the biophysics of knotting/unknotting and the unknotting mechanism of human topoisomerase IIα. The cellular effects of knotting were done in collaboration with Rick Deibler. The statistical mechanics were done in collaboration with Zhirong Liu and Hue Sun Chan. Cellular DNA knotting is driven by DNA compaction, topoisomerization, replication, supercoiling-promoted strand collision, and DNA self-interactions resulting from transposition, site-specific recombination, and transcription (Spengler, Stasiak, and Cozzarelli 1985; Heichman, Moskowitz, and Johnson 1991; Wasserman and Cozzarelli 1991; Sogo, Stasiak, Martinez-Robles et al. 1999). Type II topoisomerases are ubiquitous, essential enzymes that interconvert DNA topoisomers to resolve knots. These enzymes pass one DNA helix through another by creating an enzyme-bridged transient break. Explicitly how type II topoisomerases recognize their substrate and decide where to unknot DNA is unknown. What are the biological consequences of unresolved cellular DNA knotting? We investigated the physiological consequences of the well-accepted propensity of cellular DNA to collide and react with itself by analyzing the effects of plasmid recombination and knotting in E. coli using a site-specific recombination system. Fluctuation assays were performed to determine mutation rates of the strains used in these experiments (Rosche and Foster 2000). Our results show that DNA knotting: (i) promotes replicon loss by blocking DNA replication, (ii) blocks gene transcription, (iii) increases antibiotic sensitivity and (iv) promotes genetic rearrangements at a rate which is four orders of magnitude greater than that of an unknotted plasmid.
If unresolved, DNA knots can be lethal and may help drive genetic evolution. The faster and more efficiently type II topoisomerase unknots, the less chance for these disastrous consequences. How do type II topoisomerases unknot, rather than knot? If type II topoisomerases act randomly on juxtapositions of two DNA helices, knots are produced with probability depending on the length of the circular DNA substrate. For example, random strand passage is equivalent to random cyclization of linear substrate, and random cyclization of 10.5 kb substrate produces about 3% DNA knots, mostly trefoils (Rybenkov, Cozzarelli, and Vologodskii 1993; Shaw and Wang 1993). However, experimental data show that type II topoisomerases unknot at a level up to 90-fold the level achieved by steady-state random DNA strand passage (Rybenkov, Ullsperger, Vologodskii et al. 1997). Various models have been suggested to explain these results, and all of them assume that the enzyme directs the process. In contrast, our laboratory proposed (Buck and Zechiedrich 2004) that type II topoisomerases recognize the curvature of the two DNA helices within a juxtaposition and the resulting angle between the helices. Furthermore, the values of curvature and angle lie within their respective bounds, which are characteristic of DNA knots. Thus, our model uniquely proposes that unknotting is directed by the DNA and not the protein. We used statistical mechanics to test this hypothesis. Using a lattice polymer model, we generated conformations from preexisting juxtaposition geometries and studied the resulting knot types. First we determined the statistical relationship between the local geometry of a juxtaposition of two chain segments and whether the loop is knotted globally. We calculated the HOMFLY polynomial (Freyd, Yetter, Hoste et al. 1985) of each conformation to identify knot types. We found that hooked juxtapositions are far more likely to generate knots than free juxtapositions.
Next we studied the transitions between initial and final knot/unknot states that resulted from a type II topoisomerase-like segment passage at the juxtaposition. Selective segment passages at free juxtapositions tended to increase knot probability. In contrast, segment passages at hooked juxtapositions caused more transitions from knotted to unknotted states than vice versa, resulting in a steady-state knot probability much less than that at topological equilibrium. In agreement with experimental type II topoisomerase results, the tendency of a segment passage at a given juxtaposition to unknot is strongly correlated with the tendency of that segment passage to decatenate. These quantitative findings show that there exists discriminatory topological information in local juxtaposition geometries that could be utilized by the enzyme to unknot rather than knot. This contrasts with prior thought that the enzyme itself directs unknotting and strengthens the hypothesis proposed by our group that type II topoisomerases act on hooked rather than free juxtapositions. Will a type II topoisomerase resolve a DNA twist knot in one cycle of action? The group of knots known as twist knots is intriguing from both knot theoretical and biochemical perspectives. A twist knot consists of an interwound region with any number of crossings and a clasp with two crossings. By reversing one of the crossings in the clasp, the twist knot is converted to the unknot. However, a crossing change in the interwound region produces a twist knot with two fewer nodes. Naturally occurring knots in cells are twist knots. The unknotting number, the minimal number of crossing reversals required to convert a knot to the unknot, is equal to one for any twist knot. Each crossing reversal performed by a type II topoisomerase requires energy.
Within the cell, DNA knots might be pulled tight by forces such as those which accompany transcription, replication and segregation, thus increasing the likelihood of DNA damage. Therefore, it would be advantageous for type II topoisomerases to act on a crossing in the clasp region of a DNA twist knot, thus resolving the DNA knot in a single step. The mathematical unknotting number corresponds to the smallest number of topoisomerase strand passage events needed to untie a DNA knot. In order to study unknotting of DNA knots by a type II topoisomerase, I used site-specific recombination systems and a benchtop fermentor to isolate large quantities of knotted DNA. My data show that purified five- and seven-noded twist knots are converted to the unknot by human topoisomerase IIα with no appearance of either trefoils or five-noded twist knots, which are possible intermediates if the enzyme acted on one of the interwound nodes. Consequently, these data suggest that type II topoisomerase may preferentially act upon the clasp region of a twist knot. We have uniquely combined biology, chemistry, physics and mathematics to gain insight into the mechanism of type II topoisomerases, which are an important class of drug targets. Our results suggest that DNA knotting alters DNA structure in a way that may drive type II topoisomerase resolution of DNA knots. Ultimately, the knowledge gained about type II topoisomerases and their unknotting mechanism may lead to the development of new drugs and treatments of human infectious diseases and cancer.
 Date Issued
 2007
 Identifier
 FSU_migr_etd2754
 Format
 Thesis
 Title
 Asset Pricing in a Lucas Framework with Boundedly Rational, Heterogeneous Agents.
 Creator

Culham, Andrew J. (Andrew James), Beaumont, Paul M., Kercheval, Alec N., Schlagenhauf, Don, Goncharov, Yevgeny, Kopriva, David, Department of Mathematics, Florida State University
 Abstract/Description

The standard dynamic general equilibrium model of financial markets does a poor job of explaining the empirical facts observed in real market data. The common assumptions of homogeneous investors and rational expectations equilibrium are thought to be major factors leading to this poor performance. In an attempt to relax these assumptions, the literature has seen the emergence of agent-based computational models where artificial economies are populated with agents who trade in stylized asset markets. Although they offer a great deal of flexibility, the theoretical community has often criticized these agent-based models because the agents are too limited in their analytical abilities. In this work, we create an artificial market with a single risky asset and populate it with fully optimizing, forward-looking, infinitely lived, heterogeneous agents. We restrict the state space of our agents by not allowing them to observe the aggregate distribution of wealth, so they are required to compute their conditional demand functions while simultaneously learning the equations of motion for the aggregate state variables. We develop an efficient and flexible model code that can be used to explore a wide range of asset pricing questions while remaining consistent with conventional asset pricing theory. We validate our model and code against known analytical solutions as well as against a new analytical result for agents with differing discount rates. Our simulation results for general cases without known analytical solutions show that, in general, agents' asset holdings converge to a steady-state distribution and the agents are able to learn the equilibrium prices despite the restricted state space. Further work will be necessary to determine whether the exceptional cases have some fundamental theoretical explanation or can be attributed to numerical issues. We conjecture that convergence to the equilibrium is global and that the market-clearing price acts to guide the agents' forecasts toward that equilibrium.
 Date Issued
 2007
 Identifier
 FSU_migr_etd2948
 Format
 Thesis
 Title
 Rheology and Mesoscale Morphology of Flows of Cholesteric and Nematic Liquid Crystal Polymers.
 Creator

Cui, Zhenlu, Wang, Qi, Liu, Guosheng, Magnan, Jerry F., Sussman, Mark, Tam, Christopher, Department of Mathematics, Florida State University
 Abstract/Description

Cholesteric liquid crystals (CLC) are mesophases in which the average direction of molecular orientation exhibits a chiral (twisted) pattern along its normal direction. In the past, the rheological and flow properties of CLC have been studied only scarcely. This is due to the natural tendency of a cholesteric to favor its characteristic, twisted configuration, which naturally leads to more complex arrangements of the optic axis than in pure nematics and to complicated spatial structures. In this dissertation, we address the issues related to rheology and flow-induced structures in CLC and nematic polymers, with emphasis on the role of the anisotropic elasticities. In the first part of this dissertation, we study the permeation flow problem using a mesoscopic theory obtained from the kinetic theory for cholesteric liquid crystal polymers and resolve the inconsistency issue in the literature. Then we give a systematic study of steady structures and transient behavior in flows of nematic polymers. In the second part of this dissertation, we develop a hydrodynamic theory for flows of CLCPs following the continuum mechanics formulation of McMillan's second-order tensor theory for liquid crystals, and study phase transitions in chiral nematic liquid crystals as well as the rheological behaviors and flow properties of CLCPs.
 Date Issued
 2005
 Identifier
 FSU_migr_etd2952
 Format
 Thesis
 Title
 Single- and Multiple-Objective Stochastic Programming Models with Applications to Aerodynamics.
 Creator

Croicu, Ana-Maria, Hussaini, M. Yousuff, Srivastava, Anuj, Kopriva, David, Wang, Qi, Department of Mathematics, Florida State University
 Abstract/Description

Deterministic design assumes that there is no uncertainty in the modeling parameters, and as a consequence, there is no variability in the simulation outputs. Therefore, deterministic optimal designs that are obtained without taking uncertainty into account are usually unreliable. This is the case with transonic shape optimization, where the randomness in the cruise Mach number might have significant impact on the optimal geometric design. In this context, a stochastic search turns out to be more appropriate. Approaches to stochastic optimization have followed a variety of modeling philosophies, but little has been done to systematically compare different models. The goal of this thesis is to present a comparison between two stochastic optimization algorithms, with the emphasis on applications, especially on airfoil shape optimization. Single-objective and multi-objective optimization programs are analyzed as well. The relationship between the expected minimum value (EMV) criterion and the minimum expected value (MEV) criterion is explored, and it is shown that, under favorable conditions, a better optimal point can be obtained via the EMV approach. Unfortunately, the advantages of using the EMV approach are far outweighed by its exorbitant computational cost.
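The relationship between the two criteria can be seen in a toy Monte Carlo comparison (a hypothetical quadratic objective, not the aerodynamic design problem): the expected minimum value never exceeds the minimum expected value, because in the EMV criterion the minimum may be taken scenario by scenario.

```python
import random

def emv_vs_mev(xs, scenarios, f):
    """Compare the Expected Minimum Value criterion, E[min_x f(x, w)],
    with the Minimum Expected Value criterion, min_x E[f(x, w)], on a
    finite design set xs and a finite sample of random scenarios."""
    emv = sum(min(f(x, w) for x in xs) for w in scenarios) / len(scenarios)
    mev = min(sum(f(x, w) for w in scenarios) / len(scenarios) for x in xs)
    return emv, mev

random.seed(0)
# Hypothetical objective: quadratic loss whose best design shifts with w.
f = lambda x, w: (x - w) ** 2
xs = [i / 10 for i in range(11)]              # candidate designs in [0, 1]
scenarios = [random.random() for _ in range(1000)]
emv, mev = emv_vs_mev(xs, scenarios, f)
```

For this objective the gap is large: tailoring the design to each scenario gives a tiny expected loss, while a single design must absorb the full variance of w.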
 Date Issued
 2005
 Identifier
 FSU_migr_etd3027
 Format
 Thesis
 Title
 Solutions of Second Order Recurrence Relations.
 Creator

Levy, Giles, Van Hoeij, Mark, Van Engelen, Robert A., Aldrovandi, Ettore, Aluffi, Paolo, Department of Mathematics, Florida State University
 Abstract/Description

This thesis presents three algorithms, each of which returns a transformation from a base equation to the input equation using transformations that preserve order and homogeneity (referred to as gt-transformations). The first and third algorithms are new, and the second algorithm is an improvement over prior algorithms for the second-order case. The first algorithm, `Find 2F1', finds a gt-transformation to a recurrence relation satisfied by a hypergeometric series u(n) = hypergeom([a+n, b],[c],z), if such a transformation exists. The second algorithm, `Find Liouvillian', finds a gt-transformation to a recurrence relation of the form u(n+2) + b(n)u(n) = 0 for some b(n) in C(n), if such a transformation exists. The third algorithm, `Database Solver', takes advantage of a large database of sequences, `The On-Line Encyclopedia of Integer Sequences' maintained by Neil J. A. Sloane at AT&T Labs Research. It employs this database by using the recurrence relations that the sequences satisfy as base equations from which to return a gt-transformation, if such a transformation exists.
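The idea of matching a sequence against a recurrence, as the `Database Solver' does with OEIS entries, can be sketched in a toy form: fit constant coefficients c1, c2 from the first terms and then verify the recurrence on the rest. (The actual algorithms handle rational-function coefficients and gt-transformations, which this sketch does not.)

```python
from fractions import Fraction

def find_order2_recurrence(seq):
    """Look for constants c1, c2 with u(n+2) = c1*u(n+1) + c2*u(n) fitting
    the whole sequence.  Solves the 2x2 linear system given by the first
    four terms, then verifies the recurrence on all remaining terms.
    Returns (c1, c2) as exact rationals, or None if no such pair fits."""
    u = [Fraction(t) for t in seq]
    # System:  c1*u1 + c2*u0 = u2  and  c1*u2 + c2*u1 = u3  (Cramer's rule)
    det = u[1] * u[1] - u[0] * u[2]
    if det == 0:
        return None                      # degenerate system; no unique fit
    c1 = (u[2] * u[1] - u[3] * u[0]) / det
    c2 = (u[1] * u[3] - u[2] * u[2]) / det
    for n in range(len(u) - 2):
        if u[n + 2] != c1 * u[n + 1] + c2 * u[n]:
            return None                  # the fitted pair fails later terms
    return c1, c2
```

For the Fibonacci numbers this recovers (1, 1); a sequence that breaks the fitted recurrence at any later term is rejected.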
 Date Issued
 2010
 Identifier
 FSU_migr_etd3099
 Format
 Thesis
 Title
 OpenMath Library for Computing on Riemann Surfaces.
 Creator

Lebedev, Yuri, Seppälä, Mika, Van Engelen, Robert, Van Hoeij, Mark, Aluffi, Paolo, Department of Mathematics, Florida State University
 Abstract/Description

This thesis carefully reviews computational methods that will act as a tool in the research of Riemann surfaces. We are interested in representing a Riemann surface from many equivalent points of view. The goal is to define a Riemann surface so it can be freely and unambiguously exchanged between mathematical servers by creating a set of suitable OpenMath CDs.
 Date Issued
 2008
 Identifier
 FSU_migr_etd3208
 Format
 Thesis
 Title
 Biomedical Applications of Shape Descriptors.
 Creator

Celestino, Christian Edgar Laing, Sumners, De Witt, Greenbaum, Nancy, Mio, Washington, Hurdal, Monica, Department of Mathematics, Florida State University
 Abstract/Description

Given an edge-oriented polygonal graph in R^3, we describe a method for computing the writhe as the average of weighted directional writhe numbers of the graph in a few directions. These directions are determined by the graph, and the weights are determined by the areas of path-connected open regions on the unit sphere; within each open region the directional writhe is constant. We develop formulas for the writhe of polygons on Bravais lattices and a few crystallographic groups, and discuss applications to ring polymers. In addition, we obtain a closed formula for the writhe of graphs which extends the formula for the writhe of a polygon in R^3, including the important special case of the writhe of embedded open arcs. Additionally, we have developed shape descriptors based on a family of geometric measures for the purpose of classification and identification of shape differences for graphs. These shape descriptors involve combinations of the writhe and average crossing numbers of curves, as well as total curvature, ropelength, and thickness. We have applied these shape descriptors to RNA tertiary structures and to families of sulcal curves from human brain surfaces. Preliminary results give an automatic method to distinguish RNA motifs: clear differentiation among tRNA and/or ribozymes, and a distinction between mesophilic and thermophilic tRNA, is shown. In addition, we observe a direct correlation between the length of an RNA backbone and its mean average crossing number, described accurately by a power function. As a neuroscience application, human brain surfaces were extracted from MRI scans of human brains. In our preliminary results, an automatic differentiation between sulcal paths from the left and right hemispheres, an age differentiation, and a male/female classification were achieved.
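The directional-writhe machinery summarized above ultimately computes the Gauss double integral Wr = (1/4π) ∮∮ (t₁ × t₂)·(r₁ − r₂)/|r₁ − r₂|³ ds₁ ds₂. As a rough illustration (a brute-force numerical approximation of our own, not the dissertation's region-based averaging method), one can estimate the writhe of a polygonal curve by densely sampling its edges:

```python
import math

def writhe(points, samples_per_edge=30):
    """Approximate the writhe of a closed polygon via the Gauss double
    integral, summed over pairs of short directed segments."""
    pts, tans, dss = [], [], []
    m = len(points)
    for i in range(m):
        a, b = points[i], points[(i + 1) % m]
        edge = [b[k] - a[k] for k in range(3)]
        length = math.sqrt(sum(c * c for c in edge))
        t = [c / length for c in edge]          # unit tangent on this edge
        ds = length / samples_per_edge
        for s in range(samples_per_edge):
            pts.append([a[k] + (s + 0.5) * ds * t[k] for k in range(3)])
            tans.append(t)
            dss.append(ds)
    total = 0.0
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            r = [pts[i][k] - pts[j][k] for k in range(3)]
            d = math.sqrt(sum(c * c for c in r))
            if d < 1e-9:
                continue
            # cross product t_i x t_j, dotted with r_i - r_j
            cx = tans[i][1] * tans[j][2] - tans[i][2] * tans[j][1]
            cy = tans[i][2] * tans[j][0] - tans[i][0] * tans[j][2]
            cz = tans[i][0] * tans[j][1] - tans[i][1] * tans[j][0]
            total += (cx * r[0] + cy * r[1] + cz * r[2]) / d ** 3 * dss[i] * dss[j]
    # factor 2: the sum over i < j covers each ordered pair (i, j), (j, i) once
    return 2 * total / (4 * math.pi)

# Sanity check: a planar polygon has writhe exactly 0.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(abs(writhe(square)) < 1e-9)
```

For a planar curve every cross product t_i × t_j is normal to the plane while r_i − r_j lies in it, so each term vanishes identically.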
 Date Issued
 2007
 Identifier
 FSU_migr_etd3314
 Format
 Thesis
 Title
 All Speed Multi-Phase Flow Solvers.
 Creator

Kadioglu, Samet Y., Sussman, Mark, Telotte, John, Hussaini, Yousuff, Wang, Qi, Erlebacher, Gordon, Department of Mathematics, Florida State University
 Abstract/Description

A new second-order primitive preconditioner technique (an all-speed method) for solving all-speed single/multi-phase flow is presented. With this technique, one can compute both compressible and incompressible flows with Mach-uniform accuracy and efficiency (i.e., the accuracy and efficiency of the method are independent of the Mach number). The new primitive preconditioner (all-speed/Mach-uniform) technique can handle both strong and weak shocks, providing highly resolved shock solutions together with correct shock speeds. In addition, the new technique performs very well in the zero-Mach limit. In the case of multi-phase flow, the new primitive preconditioner technique enables one to accurately treat phase boundaries across which there is a large impedance mismatch. When solving multi-dimensional all-speed multi-phase flows, we introduce adaptive solution techniques which exploit the advantages of Mach-uniform methods. We compute a variety of problems from low-speed to high-Mach-number flows, including multi-phase flow tests, e.g., computing the growth and collapse of adiabatic bubbles for the study of underwater explosions.
 Date Issued
 2005
 Identifier
 FSU_migr_etd3391
 Format
 Thesis
 Title
 Intersection Numbers of Divisors in Graph Varieties.
 Creator

Jones, Deborah, Aluffi, Paolo, Aldrovandi, Ettore, Hironaka., Eriko, Klassen, Eric, Reina, Laura, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation studies certain intersection numbers of exceptional divisors arising from blowing up subspaces of lattices associated to graphs. These permit the computation of the Segre class of a scheme associated to the graph/lattice. Explicit formulas are provided for lattices associated to trees, and several patterns among these numbers are explored. The problem can be related to the study of so-called Cremona transformations, and it is shown that the geometry of such transformations explains a certain symmetry pattern we discovered.
 Date Issued
 2003
 Identifier
 FSU_migr_etd3426
 Format
 Thesis
 Title
 Calibration of Multivariate Generalized Hyperbolic Distributions Using the EM Algorithm, with Applications in Risk Management, Portfolio Optimization and Portfolio Credit Risk.
 Creator

Hu, Wenbo, Kercheval, Alec, Huffer, Fred, Case, Bettye, Nichols, Warren, Nolder, Craig, Department of Mathematics, Florida State University
 Abstract/Description

The distributions of many financial quantities are well known to have heavy tails, exhibit skewness, and have other non-Gaussian characteristics. In this dissertation we study an especially promising family: the multivariate generalized hyperbolic (GH) distributions. This family includes and generalizes the familiar Gaussian and Student t distributions, and the so-called skewed t distributions, among many others. The primary obstacle to the application of such distributions is the numerical difficulty of calibrating the distributional parameters to the data. In this dissertation we describe a way to stably calibrate GH distributions for a wider range of parameters than has previously been reported. In particular, we develop a version of the EM algorithm for calibrating GH distributions. This is a modification of methods proposed in McNeil, Frey, and Embrechts (2005), and generalizes the algorithm of Protassov (2004). Our algorithm extends the stability of the calibration procedure to a wide range of parameters, now including parameter values that maximize log-likelihood for our real market data sets. This allows, for the first time, certain GH distributions to be used in modeling contexts where previously they were numerically intractable. Our algorithm enables us to make new uses of GH distributions in three financial applications. First, we forecast univariate Value-at-Risk (VaR) for stock index returns, and we show in out-of-sample backtesting that the GH distributions outperform the Gaussian distribution. Second, we calculate an efficient frontier for equity portfolio optimization under the skewed t distribution, using Expected Shortfall as the risk measure; here we show that the Gaussian efficient frontier is actually unreachable if returns are skewed t distributed. Third, we build an intensity-based model to price Basket Credit Default Swaps by calibrating the skewed t distribution directly, without the need to separately calibrate the skewed t copula. To our knowledge this is the first use of the skewed t distribution in portfolio optimization and in portfolio credit risk.
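To make the VaR-backtesting idea concrete, here is a minimal sketch of historical (empirical-quantile) VaR with a violation count. It uses fabricated toy returns and an in-sample count for brevity, whereas the dissertation fits GH distributions and backtests out-of-sample; the function name and data are ours:

```python
# Hypothetical daily returns (fabricated toy numbers, not market data).
returns = [0.012, -0.008, 0.003, -0.021, 0.007, -0.015, 0.001, 0.009,
           -0.004, 0.018, -0.030, 0.002, -0.001, 0.011, -0.006, 0.004]

def historical_var(rets, alpha=0.95):
    """One-sided historical VaR: a loss threshold exceeded with
    probability roughly 1 - alpha under the empirical distribution."""
    losses = sorted(-r for r in rets)        # losses, ascending
    k = int(alpha * len(losses))             # index of the alpha-quantile
    return losses[min(k, len(losses) - 1)]

var95 = historical_var(returns)
# Backtest: count how often realized losses exceed the VaR estimate.
violations = sum(1 for r in returns if -r > var95)
print(var95, violations)
```

A parametric VaR forecast replaces the empirical quantile with the quantile of a fitted distribution (Gaussian or GH); the violation count is then compared with the expected rate (1 − α)·n.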
 Date Issued
 2005
 Identifier
 FSU_migr_etd3694
 Format
 Thesis
 Title
 A Computational Study of Ion Conductance in the KcsA K⁺ Channel Using a Nernst-Planck Model with Explicit Resident Ions.
 Creator

Jung, Yong Woon, Mascagni, Michael A., Huffer, Fred, Bowers, Philip, Klassen, Eric, Cogan, Nick, Department of Mathematics, Florida State University
 Abstract/Description

In this dissertation, we describe the biophysical mechanisms underlying the relationship between the structure and function of the KcsA K+ channel. Because of the conciseness of electrodiffusion theory and the computational advantages of a continuum approach, Nernst-Planck (NP)-type models such as the Goldman-Hodgkin-Katz (GHK) and Poisson-Nernst-Planck (PNP) models have been used to describe currents in ion channels. However, the standard PNP (SPNP) model is known to be inapplicable to narrow ion channels because it cannot handle discrete ion properties. To overcome this weakness, we formulated the explicit resident ions Nernst-Planck (ERINP) model, which applies a local explicit model where the continuum model fails. The effects of the ERI Coulomb potential, the ERI-induced potential, and the ERI dielectric constant on ion conductance were then tested in the ERINP model. Using the current-voltage (I-V) and current-concentration (I-C) relationships determined from the ERINP model, we discovered biologically significant information that is unobtainable from the traditional continuum model. The mathematical analysis of the K+ ion dynamics revealed a tight structure-function system with a shallow well, a deep well, and two K+ ions resident in the selectivity filter. We also demonstrated that the ERINP model not only reproduces the experimental results with a realistic set of parameters, but also reduces CPU costs.
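For orientation, the GHK model mentioned above is the constant-field special case of the Nernst-Planck description, and its current vanishes at the Nernst equilibrium potential. The sketch below computes both for K+ with textbook concentrations; the permeability value is a hypothetical placeholder, and this is an illustration of the classical formulas, not of the ERINP model:

```python
import math

R, F = 8.314, 96485.0          # J/(mol*K), C/mol
T, z = 310.0, 1                # body temperature (K), K+ valence

def nernst(c_out, c_in):
    """Nernst equilibrium potential (volts) for an ion of valence z."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

def ghk_current(V, P, c_in, c_out):
    """Goldman-Hodgkin-Katz current density (A/m^2) at membrane voltage V:
    the constant-field solution of the Nernst-Planck flux equation."""
    xi = z * F * V / (R * T)
    return P * z * F * xi * (c_in - c_out * math.exp(-xi)) / (1.0 - math.exp(-xi))

# Typical K+ concentrations (mol/m^3): about 4 outside, 140 inside the cell.
E_K = nernst(4.0, 140.0)
print(round(E_K * 1000, 1))     # reversal potential in mV, about -95 mV at 310 K
# The GHK current is zero at the reversal potential (P here is a placeholder).
print(abs(ghk_current(E_K, 1e-8, 140.0, 4.0)) < 1e-9)
```

At V = E_K the factor c_in − c_out·exp(−ξ) vanishes, which is exactly the statement that no net current flows at equilibrium.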
 Date Issued
 2010
 Identifier
 FSU_migr_etd3741
 Format
 Thesis
 Title
 4D Var Data Assimilation and POD Model Reduction Applied to Geophysical Dynamics Models.
 Creator

Chen, Xiao, Navon, Ionel Michael, Sussman, Mark, Hart, Robert, Wang, Xiaoming, Erlebacher, Gordon, Department of Mathematics, Florida State University
 Abstract/Description

Standard spatial discretization schemes for dynamical systems (DS) usually lead to large-scale, high-dimensional, and, in general, nonlinear systems of ordinary differential equations. Due to limited computational and storage capabilities, Reduced Order Modeling (ROM) techniques from systems and control theory provide an attractive approach to approximating the large-scale discretized state equations using low-dimensional models. The objective of 4D variational data assimilation (4D Var) is to obtain the minimum of a cost functional estimating the discrepancy between the model solutions and observations distributed in time and space. A control reduction methodology based on Proper Orthogonal Decomposition (POD), referred to as POD 4D Var, has been widely used for nonlinear systems with tractable computations. However, the appropriate criteria for updating a POD ROM are not yet known in the application to optimal control, due to the limited validity of the POD ROM for inverse problems. Therefore, the classical Trust-Region (TR) approach combined with POD (TRPOD) was recently proposed as a way to alleviate the above difficulties. There is a global convergence result for TR, and, benefiting from the trust-region philosophy, rigorous convergence results guarantee that the iterates produced by the TRPOD algorithm converge to the solution of the original optimization problem. In order to reduce the POD basis size and still achieve global convergence, a method was proposed to incorporate information from the 4D Var system into the ROM procedure by implementing a dual-weighted POD (DWPOD) method. The first new contribution of this dissertation consists in studying a new methodology combining dual-weighted snapshot selection and trust-region POD adaptivity (DWTRPOD). Another new contribution is to combine the incremental POD 4D Var, balanced truncation techniques, and the method-of-snapshots methodology.
In the linear DS case, this is done by integrating the linear forward model many times with different initial conditions in order to construct an ensemble of snapshots from which to generate the forward POD modes. Those forward POD modes then serve as initial conditions for the corresponding adjoint system. We integrate the adjoint system a large number of times, based on the different initial conditions generated by the forward POD modes, to construct an ensemble of adjoint snapshots, from which we can generate an ensemble of so-called adjoint POD modes. Thus we can approximate the controllability Gramian of the adjoint system instead of solving the computationally expensive coupled Lyapunov equations. To sum up, in the incremental POD 4D Var we can approximate the controllability Gramian by integrating the tangent linear model (TLM) a number of times, and the observability Gramian by integrating its adjoint a number of times as well. A new idea contributed in this dissertation is to extend the snapshot-based POD methodology to the nonlinear system. Furthermore, we modify the classical algorithms to reduce the computations even more significantly. We propose a novel approach: construct an ensemble of snapshots by integrating the TLM only once and obtain its TLM POD modes; then use each TLM POD mode as an initial condition to generate a small ensemble of adjoint snapshots and their adjoint POD modes; finally, construct a large ensemble of adjoint POD modes by putting together the small ensembles. Our aim in a forthcoming study is thus to test approximations of the controllability Gramian obtained by integrating the TLM once, and of the observability Gramian obtained by integrating the adjoint model a reduced number of times.
Optimal control of a finite element limited-area shallow water equations model is explored with a view to applying variational data assimilation (VDA) by obtaining the minimum of a functional estimating the discrepancy between the model solutions and distributed observations. In our application some simplifying hypotheses are used: model error is neglected, only the initial conditions are taken as control variables, lateral boundary conditions are periodic, and the observations are assumed to be distributed in space and time. Derivation of the optimality system, including the adjoint state, permits computing the gradient of the cost functional with respect to the initial conditions, which are used as control variables in the optimization. Different numerical aspects related to the construction of the adjoint model and the verification of its correctness are addressed. The data assimilation setup is tested for various mesh resolutions and different time steps using a modular computer code. Finally, the impact of the large-scale unconstrained minimization solver L-BFGS is assessed for various lengths of the time window. We then obtain a reduced-order model (ROM) of the above inverse problem based on proper orthogonal decomposition (POD), referred to as POD 4D Var. Different approaches to the POD implementation of the reduced inverse problem are compared, including a dual-weighted method for snapshot selection coupled with a trust-region POD approach. The numerical results point to improved accuracy in all metrics tested when the dual-weighted choice of snapshots is combined with POD adaptivity of the trust-region type; ad-hoc adaptivity of POD 4D Var turns out to yield less accurate results than trust-region POD when compared with the high-fidelity model.
Finally, we study solutions of an inverse problem for a global shallow water model, controlling its initial conditions specified from the 40-yr ECMWF Re-Analysis (ERA-40) data sets, in the presence of full or incomplete observations assimilated over a time interval (window of assimilation) and in the presence of background error covariance terms. As an extension of this research, we obtain a reduced-order model of the above inverse problem, based on POD, for a finite volume global shallow water equations model using the Lin-Rood flux-form semi-Lagrangian semi-implicit time integration scheme. Different approaches to the POD implementation of the reduced inverse problem are again compared, including a dual-weighted method for snapshot selection coupled with a trust-region POD adaptivity approach, and numerical results with various observational densities and background error covariance operators are presented. The POD 4D Var results combined with trust-region adaptivity are similar, in terms of various error metrics, to the full 4D Var results, but are obtained using significantly fewer minimization iterations and less CPU time. Based on our previous and current research work, we conclude that POD 4D Var certainly warrants further study, with promising potential for extension to operational 3D numerical weather prediction models.
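The method of snapshots mentioned above builds POD modes from the eigenvectors of the small snapshot correlation matrix. As a toy illustration (our own, restricted to m = 2 snapshots so the eigenproblem has a closed form; real POD uses many snapshots and a numerical eigensolver or SVD):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def pod_two_snapshots(s1, s2):
    """Method of snapshots for m = 2: eigendecompose the 2x2 correlation
    matrix C_ij = <s_i, s_j>/m, then lift eigenvectors to state space."""
    m = 2
    C = [[dot(s1, s1) / m, dot(s1, s2) / m],
         [dot(s2, s1) / m, dot(s2, s2) / m]]
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    tr, det = C[0][0] + C[1][1], C[0][0] * C[1][1] - C[0][1] * C[1][0]
    disc = math.sqrt(tr * tr / 4 - det)
    lams = [tr / 2 + disc, tr / 2 - disc]
    modes = []
    for lam in lams:
        # Eigenvector (a1, a2) of C; mode = a1*s1 + a2*s2, normalized.
        a = (C[0][1], lam - C[0][0]) if abs(C[0][1]) > 1e-12 else (1.0, 0.0)
        phi = [a[0] * x + a[1] * y for x, y in zip(s1, s2)]
        nrm = math.sqrt(dot(phi, phi)) or 1.0
        modes.append([x / nrm for x in phi])
    return lams, modes

# Two hypothetical snapshots of a discretized field.
s1 = [1.0, 2.0, 3.0, 4.0]
s2 = [2.0, 4.0, 6.0, 8.0]      # deliberately a multiple of s1
lams, modes = pod_two_snapshots(s1, s2)
# Rank-1 data: one positive eigenvalue carries 100% of the "energy".
print(round(lams[1], 9), round(lams[0] / sum(lams), 6))
```

The eigenvalue ratio λ_k / Σλ is the "energy" criterion used to truncate the POD basis; here the rank-1 snapshot set puts all of it in one mode.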
 Date Issued
 2011
 Identifier
 FSU_migr_etd3836
 Format
 Thesis
 Title
 ANOVA for Parameter-Dependent Nonlinear PDEs and Numerical Methods for the Stochastic Stokes Equations.
 Creator

Chen, Zheng, Gunzburger, Max, Huffer, Fred, Peterson, Janet, Wang, Xiaoqiang, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation includes the application of analysis-of-variance (ANOVA) expansions to analyze solutions of parameter-dependent partial differential equations, as well as the analysis and finite element approximation of the Stokes equations with stochastic forcing terms. In the first part of the dissertation, the impact of parameter-dependent boundary conditions on the solutions of a class of nonlinear PDEs is considered. Based on ANOVA expansions of functionals of the solutions, the effects of different parameter sampling methods on the accuracy of surrogate optimization approaches to PDE-constrained optimization are considered. The effects of the smoothness of the functional and of the nonlinearity in the PDE on the decay of the higher-order ANOVA terms are studied. The concept of effective dimension is used to determine the accuracy of the ANOVA expansions. Demonstrations are given to show that whenever truncated ANOVA expansions of functionals provide accurate approximations, optimizers found through a simple surrogate optimization strategy are also relatively accurate. The effects of several parameter sampling strategies on the accuracy of the surrogate optimization method are also considered; it is found that, for this sparse sampling application, the Latin hypercube sampling method has advantages over other well-known sampling methods. Although most of the results are presented and discussed in the context of surrogate optimization problems, they also apply to other settings such as stochastic ensemble methods and reduced-order modeling for nonlinear PDEs. In the second part of the dissertation, we study the numerical analysis of the Stokes equations driven by a stochastic process. The random processes we use are white noise, colored noise, and the homogeneous Gaussian process. When the process is white noise, we deal with the singularity of the matrix Green's functions in the form of mild solutions, with the aid of the theory of distributions.
We develop finite element methods to solve the stochastic Stokes equations. In the 2D and 3D cases, we derive error estimates for the approximate solutions, and in the 2D case we provide numerical experiments that demonstrate the algorithm and its convergence rates. On the other hand, the singularity of the matrix Green's functions necessitates the use of the homogeneous Gaussian process. In the framework of the theory of abstract Wiener spaces, stochastic integrals with respect to the homogeneous Gaussian process can be defined on a larger space than L^2. Under some conditions on the density function in the definition of the homogeneous Gaussian process, the matrix Green's functions have well-defined integrals. We study the probability properties of this kind of integral and simulate discretized colored noise.
 Date Issued
 2007
 Identifier
 FSU_migr_etd3851
 Format
 Thesis
 Title
 Finite Abelian Group Actions on Orientable Circle Bundles over Surfaces.
 Creator

Ibrahim, Caroline Maher Boulis, Heil, Wolfgang, Hollander, Myles, Hironaka, Eriko, Klassen, Eric, Department of Mathematics, Florida State University
 Abstract/Description

A finite group G acts freely on an orientable manifold M if each element of G is a homeomorphism of M without fixed points and the multiplication in G is composition of homeomorphisms. The map from M to the orbit space M/G is a regular covering map. Algebraically, associated with the G-action is a surjective homomorphism from the fundamental group of M onto G. Two G-actions are equivalent if there exists an orientation-preserving homeomorphism of M, inducing the identity on G, that preserves the group action. This topological definition translates into an algebraic one: two G-actions are equivalent if and only if the associated surjections onto G are equivalent via an automorphism of the fundamental group of M. For the manifolds M considered in this dissertation, every automorphism of the fundamental group of M can be realized by a homeomorphism of M; hence there is a one-to-one correspondence between topological and algebraic equivalence. The problem of classifying fixed-point-free finite abelian group actions on surfaces has been investigated by, among others, Nielsen, Smith, and Zimmermann. Nielsen classifies cyclic actions on surfaces, giving a list of automorphisms which he uses in his classification. Smith carries out the classification for special abelian groups; his approach differs from Nielsen's in the algebraic methods he uses. Zimmermann gives an algebraic solution to the classification of any finite abelian group action on closed surfaces; his technique is to bring every surjective homomorphism from the fundamental group of the surface onto G into normal form and then differentiate between the normal forms. In this dissertation we classify fixed-point-free finite abelian group actions on circle bundles. By results of Waldhausen, every homeomorphism of M is isotopic to a fiber-preserving homeomorphism, that is, one that preserves the S^1 factor of the bundle.
This corresponds to the algebraic condition that any automorphism of the fundamental group of M preserves the center of the group. We use the same approach as Nielsen did on surfaces: we give algorithms to bring every surjective homomorphism from the fundamental group of the bundle onto the group G into normal form, and from there we differentiate between the normal forms based on Nielsen's results. The results obtained are for circle bundles over surfaces of genus g greater than or equal to 2. A complete classification is given in the case that the circle bundle is a product bundle and G is a finite abelian group. We also obtain a complete classification of cyclic group actions and of finite abelian group actions on circle bundles that are not product bundles.
 Date Issued
 2004
 Identifier
 FSU_migr_etd3887
 Format
 Thesis
 Title
 Mathematical Analysis of the Use of Trojan Sex Chromosomes as Means of Eradication of Invasive Species.
 Creator

Gutierrez, Juan B. (Juan Bernardo), Hurdal, Monica K., Travis, Joseph, Case, Bettye Anne, Quine, Jack, Sumners, DeWitt, Bertram, Richard, Cogan, Nick G., Department of Mathematics, Florida State University
 Abstract/Description

This dissertation presents and evaluates a theoretical method for the eradication of invasive species through the use of Trojan Y chromosomes. The mathematical analysis of the Trojan Y chromosome eradication strategy is presented for the ODE case and for the PDE case in R. It is shown that it is possible to cause local extinction of species that have XY sex-determination systems, as long as they are susceptible to sex reversal. The existence of global attractors is shown for this system, and global attractors are proposed as descriptors of the dynamics of the infinite-dimensional system. The case of Poecilia formosa is studied as a natural case of Trojan X chromosomes; it is shown in this case that the combination of stochastic-dependent dissipation and high sensitivity to perturbations can lead to coexistence of P. formosa and P. mexicana. Similarities between the Trojan X chromosome and Trojan Y chromosome cases indicate that local extinction could occur in practice for the latter.
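The eradication mechanism can be caricatured with a deliberately crude ODE sketch (ours, not the dissertation's model): a constant stock c of introduced carriers sires male-only broods, so only a fraction f/(f + c) of matings produces females, and for large enough c the female population declines to extinction. All parameter values below are made up for illustration:

```python
def simulate_tyc(f0=60.0, c=60.0, beta=1.0, delta=0.3, K=100.0,
                 dt=0.1, steps=2000):
    """Toy caricature of Trojan-sex-chromosome eradication, integrated by
    forward Euler: females f obey
    df/dt = beta * f * (f/(f+c)) * (1 - f/K) - delta * f,
    where the f/(f+c) factor models female-producing matings being
    diluted by a constant introduced stock c of male-only-brood carriers."""
    f = f0
    history = [f]
    for _ in range(steps):
        births = beta * f * (f / (f + c)) * max(0.0, 1.0 - f / K)
        f += dt * (births - delta * f)
        history.append(f)
    return history

# With sufficient stocking, the female population collapses toward zero...
print(simulate_tyc()[-1] < 0.5)
# ...while without stocking (c = 0) the same model settles near carrying level.
print(simulate_tyc(c=0.0)[-1] > 50.0)
```

With c = 0 the model reduces to a logistic-type equation with a positive equilibrium; with c large, the per-capita birth rate stays below the death rate for all f, which is the toy analogue of the local-extinction result.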
 Date Issued
 2009
 Identifier
 FSU_migr_etd3892
 Format
 Thesis
 Title
 Analysis and Approximation of a Two-Band Ginzburg-Landau Model of Superconductivity.
 Creator

Chan, WanKan, Gunzburger, Max, Peterson, Janet, Manousakis, Efstratios, Wang, Xiaoming, Department of Mathematics, Florida State University
 Abstract/Description

In 2001, the discovery of the intermetallic compound superconductor MgB2, with a critical temperature of 39 K, stirred up great interest in using a generalization of the Ginzburg-Landau model, namely the two-band time-dependent Ginzburg-Landau (2B-TDGL) equations, to model the phenomena of two-band superconductivity. In this work, various mathematical and numerical aspects of the two-dimensional, isothermal, isotropic 2B-TDGL equations in the presence of a time-dependent applied magnetic field and a time-dependent applied current are investigated. A new gauge is proposed to facilitate the inclusion of a time-dependent current into the model. There are three parts to this work. First, the 2B-TDGL model including a time-dependent applied current is derived. Then, assuming sufficient smoothness of the boundary of the domain, the applied magnetic field, and the applied current, the global existence, uniqueness, and boundedness of weak solutions of the 2B-TDGL equations are proved. Second, the existence, uniqueness, and stability of finite element approximations of the solutions are shown, and error estimates are derived. Third, numerical experiments are presented and compared to known results related to MgB2 or general two-band superconductivity. Some novel behaviors are also identified.
 Date Issued
 2007
 Identifier
 FSU_migr_etd3923
 Format
 Thesis
 Title
 The Fractal Nature of Lightning: An Investigation of the Fractal Relationship of the Structure of Lightning to Terrain.
 Creator

GrahamJones, Brian Clay, Hunter, Christopher, Elsner, James B., Bellenot, Steve, Department of Mathematics, Florida State University
 Abstract/Description

This study investigates the relationship between the structure of lightning and the topography below it, and whether such a relationship exists at all.
 Date Issued
 2006
 Identifier
 FSU_migr_etd4055
 Format
 Thesis
 Title
 I. A Modified k-ε Turbulence Model for High Speed Jets at Elevated Temperatures. II. Modeling and a Computational Study of Spliced Acoustic Liners.
 Creator

Ganesan, Anand, Tam, Christopher K. W., Ng, Hon-Kie, Hunter, Christopher, Navon, Ionel Michael, Sussman, Mark, Department of Mathematics, Florida State University
 Abstract/Description

A modification to the k-ε model, aimed at extending its applicability to the computation of the mean flow and noise of high-speed hot jets, is proposed. The motivation for the proposal arises from the observation that there is a large density-induced increase in the growth rate of spatial instabilities of a mixing layer when the lighter fluid moves faster. This consideration leads to the incorporation of a density-gradient-related contribution to the turbulent eddy viscosity of the k-ε model. Computed jet mean flow profiles and centerline velocity distributions of high-speed jets at elevated temperatures are found to be in better agreement with experimental measurements when the density modification is included. Noise predictions including the density effect are also in better agreement with microphone measurements. These good agreements offer strong support for the validity and usefulness of the proposed density correction formula. A time-domain computational methodology has been developed to study the propagation and acoustic scattering of fan tones by spliced liners. The front portion of the engine is modeled as a duct. Significant acoustic scattering is observed for frequencies close to cut-off; in this case, the total scattered energy was found to exceed the energy in the incident mode. Under such conditions, spliced liners are found to be less effective than uniform liners. Liner performance was found to depend on frequency. The simulation results agree qualitatively with the available experimental and theoretical work.
 Date Issued
 2005
 Identifier
 FSU_migr_etd4368
 Format
 Thesis