Current Search: Research Repository » Thesis » Atmospheric sciences » Department of Mathematics
Search results
Pages
- Title
- 3-Manifolds of S1-Category Three.
- Creator
-
Wang, Dongxu, Heil, Wolfgang, Niu, Xufeng, Klassen, Eric P., Hironaka, Eriko, Nichols, Warren D., Department of Mathematics, Florida State University
- Abstract/Description
-
I study 3-manifold theory, a fascinating research area in topology. Many new ideas and techniques have been introduced in recent years, making it an active and fast-developing subject. It is one of the most fruitful branches of today's mathematics and, with the solution of the Poincaré conjecture, it is receiving increasing attention. This dissertation is motivated by results about categorical properties of 3-manifolds, which can be rephrased as the study of 3-manifolds that can be covered by certain sets satisfying some homotopy properties. A special case is the problem of classifying 3-manifolds that can be covered by three simple S¹-contractible subsets. S¹-contractible subsets are subsets of a 3-manifold M³ that can be deformed into a circle in M³. In this thesis, I consider more geometric subsets with this property, namely subsets homeomorphic to 3-balls, solid tori, and solid Klein bottles. The main result is a classification of all closed 3-manifolds that can be obtained as a union of three solid Klein bottles.
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7650
- Format
- Thesis
- Title
- 4-D Var Data Assimilation and POD Model Reduction Applied to Geophysical Dynamics Models.
- Creator
-
Chen, Xiao, Navon, Ionel Michael, Sussman, Mark, Hart, Robert, Wang, Xiaoming, Erlebacher, Gordon, Department of Mathematics, Florida State University
- Abstract/Description
-
Standard spatial discretization schemes for dynamical systems (DS) usually lead to large-scale, high-dimensional, and in general nonlinear systems of ordinary differential equations. Due to limited computational and storage capabilities, Reduced Order Modeling (ROM) techniques from system and control theory provide an attractive approach to approximate the large-scale discretized state equations using low-dimensional models. The objective of 4-D variational data assimilation (4-D Var) is to obtain the minimum of a cost functional estimating the discrepancy between the model solutions and distributed observations in time and space. A control reduction methodology based on Proper Orthogonal Decomposition (POD), referred to as POD 4-D Var, has been widely used for nonlinear systems with tractable computations. However, the appropriate criteria for updating a POD ROM are not yet known in the application to optimal control, owing to the limited validity of the POD ROM for inverse problems. Therefore, the classical Trust-Region (TR) approach combined with POD (TRPOD) was recently proposed as a way to alleviate these difficulties. There is a global convergence result for TR, and, benefiting from the trust-region philosophy, rigorous convergence results guarantee that the iterates produced by the TRPOD algorithm converge to the solution of the original optimization problem. In order to reduce the POD basis size and still achieve global convergence, a method was proposed to incorporate information from the 4-D Var system into the ROM procedure by implementing a dual-weighted POD (DWPOD) method. The first new contribution of my dissertation consists in studying a new methodology combining dual-weighted snapshot selection and trust-region POD adaptivity (DWTRPOD). Another new contribution is to combine incremental POD 4-D Var, balanced truncation techniques, and the method-of-snapshots methodology. In the linear DS, this is done by integrating the linear forward model many times using different initial conditions in order to construct an ensemble of snapshots and generate the forward POD modes. Those forward POD modes then serve as the initial conditions for the corresponding adjoint system. We then integrate the adjoint system a large number of times, based on different initial conditions generated by the forward POD modes, to construct an ensemble of adjoint snapshots, from which we can generate an ensemble of so-called adjoint POD modes. Thus we can approximate the controllability Grammian of the adjoint system instead of solving the computationally expensive coupled Lyapunov equations. To sum up, in the incremental POD 4-D Var, we can approximate the controllability Grammian by integrating the tangent linear model (TLM) a number of times and the observability Grammian by integrating its adjoint also a number of times. A new idea contributed in this dissertation is to extend the snapshot-based POD methodology to the nonlinear system. Furthermore, we modify the classical algorithms in order to reduce the computational cost even more significantly. We propose a novel idea to construct an ensemble of snapshots by integrating the TLM only once, based on which we can obtain its TLM POD modes. Then each TLM POD mode is used as an initial condition to generate a small ensemble of adjoint snapshots and their adjoint POD modes.
Finally, we can construct a large ensemble of adjoint POD modes by putting together each small ensemble of adjoint POD modes. To sum up, our idea in a forthcoming study is to test approximations of the controllability Grammian by integrating the TLM once and of the observability Grammian by integrating the adjoint model a reduced number of times. Optimal control of a finite element limited-area shallow water equations model is explored with a view to applying variational data assimilation (VDA) by obtaining the minimum of a functional estimating the discrepancy between the model solutions and distributed observations. In our application, some simplifying hypotheses are used: the error of the model is neglected, only the initial conditions are considered as control variables, lateral boundary conditions are periodic, and the observations are assumed to be distributed in space and time. Derivation of the optimality system, including the adjoint state, permits computing the gradient of the cost functional with respect to the initial conditions, which are used as control variables in the optimization. Different numerical aspects related to the construction of the adjoint model and verification of its correctness are addressed. The data assimilation set-up is tested for various mesh resolution scenarios and different time steps using a modular computer code. Finally, the impact of the large-scale unconstrained minimization solver L-BFGS is assessed for various lengths of the time window. We then attempt to obtain a reduced-order model (ROM) of the above inverse problem, based on proper orthogonal decomposition (POD), referred to as POD 4-D Var. Different approaches to POD implementation of the reduced inverse problem are compared, including a dual-weighted method for snapshot selection coupled with a trust-region POD approach. Numerical results point to an improved accuracy in all metrics tested when the dual-weighted choice of snapshots is combined with POD adaptivity of the trust-region type. Ad-hoc adaptivity of the POD 4-D Var turns out to yield less accurate results than trust-region POD when compared with the high-fidelity model. Finally, we study solutions of an inverse problem for a global shallow water model controlling its initial conditions, specified from the 40-yr ECMWF Re-Analysis (ERA-40) datasets, in the presence of full or incomplete observations being assimilated in a time interval (window of assimilation) and in the presence of background error covariance terms. As an extension of this research, we attempt to obtain a reduced-order model of the above inverse problem, based on POD, referred to as POD 4-D Var, for a finite volume global shallow water equations model based on the Lin-Rood flux-form semi-Lagrangian semi-implicit time integration scheme. Different approaches to POD implementation for the reduced inverse problem are compared, including a dual-weighted method for snapshot selection coupled with a trust-region POD adaptivity approach. Numerical results with various observational densities and background error covariance operators are also presented. The POD 4-D Var results combined with trust-region adaptivity exhibit similarity to the full 4-D Var results in terms of various error metrics, but are obtained using significantly fewer minimization iterations and less CPU time.
Based on our previous and current research work, we conclude that POD 4-D Var certainly warrants further study, with promising potential for extension to operational 3-D numerical weather prediction models.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-3836
- Format
- Thesis
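The abstract above relies on proper orthogonal decomposition computed by the method of snapshots. For orientation only (this is not code from the dissertation), the sketch below extracts a reduced POD basis from a snapshot matrix via a thin SVD; the snapshot data, the energy tolerance, and all dimensions are illustrative assumptions.

```python
# Illustrative sketch of POD via the method of snapshots (not the dissertation's code).
# Columns of S are state snapshots u(t_k); the POD modes are the leading left singular vectors.
import numpy as np

def pod_modes(S, energy=0.99):
    """Return the leading POD modes capturing `energy` of the snapshot variance."""
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)   # thin SVD of the snapshot matrix
    cumulative = np.cumsum(sigma**2) / np.sum(sigma**2)   # fraction of energy captured
    r = int(np.searchsorted(cumulative, energy)) + 1      # smallest r reaching the target
    return U[:, :r], sigma[:r]

# Hypothetical snapshot ensemble: 200 states of a 1000-dimensional discretized model.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))  # low-rank data
modes, sigma = pod_modes(snapshots)
print(modes.shape)   # (1000, r): reduced basis onto which the full model is projected
```

The same construction, applied to adjoint snapshots, yields the adjoint POD modes discussed in the abstract.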
- Title
- Adaptive Spectral Element Methods to Price American Options.
- Creator
-
Willyard, Matthew, Kopriva, David, Eugenio, Paul, Case, Bettye Anne, Gallivan, Kyle, Nolder, Craig, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
We develop an adaptive spectral element method to price American options, whose solutions contain a moving singularity, automatically and to within prescribed errors. The adaptive algorithm uses an error estimator to determine where refinement or de-refinement is needed and a work estimator to decide whether to change the element size or the polynomial order. We derive two local error estimators and a global error estimator. The local error estimators are derived from the Legendre coefficients and the global error estimator is based on the adjoint problem. One local error estimator uses the rate of decay of the Legendre coefficients to estimate the error. The other local error estimator compares the solution to an estimated solution using fewer Legendre coefficients found by the Tau method. The global error estimator solves the adjoint problem to weight local error estimates to approximate a terminal error functional. Both types of error estimators produce meshes that match expectations by being fine near the early exercise boundary and strike price and coarse elsewhere. The produced meshes also adapt as expected by de-refining near the strike price as the solution smooths and staying fine near the moving early exercise boundary. Both types of error estimators also give solutions whose error is within prescribed tolerances. The adjoint-based error estimator is more flexible, but costs up to three times as much as using the local error estimate alone. The global error estimator has the advantages of tracking the accumulation of error in time and being able to discount large local errors that do not affect the chosen terminal error functional. The local error estimator is cheaper to compute because the global error estimator has the added cost of solving the adjoint problem.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-0892
- Format
- Thesis
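One of the local error estimators described above infers the truncation error from the decay rate of the Legendre coefficients. The sketch below illustrates that general idea only (it is not the dissertation's estimator): it fits an exponential decay |a_n| ≈ C e^(−σ n) to the tail coefficients of a Legendre expansion and extrapolates the neglected tail; the sample function, degree, and tail length are assumptions.

```python
# Illustrative sketch: estimate spectral truncation error from Legendre-coefficient decay.
import numpy as np
from numpy.polynomial import legendre

def decay_error_estimate(f, degree=16, tail=6):
    x = np.cos(np.pi * (np.arange(4 * degree) + 0.5) / (4 * degree))   # sample points in (-1, 1)
    coeffs = np.abs(legendre.legfit(x, f(x), degree))                  # |a_n|, n = 0..degree
    n = np.arange(degree - tail + 1, degree + 1)
    slope, intercept = np.polyfit(n, np.log(coeffs[n] + 1e-300), 1)    # log|a_n| ~ intercept + slope*n
    sigma, C = -slope, np.exp(intercept)
    # Geometric-tail estimate of the coefficients neglected beyond `degree`
    return C * np.exp(-sigma * (degree + 1)) / (1.0 - np.exp(-sigma))

print(decay_error_estimate(lambda x: np.exp(np.sin(3 * x))))
```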
- Title
- Algorithms for Computing Congruences Between Modular Forms.
- Creator
-
Heaton, Randy, Agashe, Amod, Van Hoeij, Mark, Capstick, Simon, Aldrovandi, Ettore, Department of Mathematics, Florida State University
- Abstract/Description
-
Let $N$ be a positive integer. We first discuss a method for computing intersection numbers between subspaces of $S_{2}(\Gamma_{0}(N),\mathbb{C})$. Then we present a new method for computing a basis of q-expansions for $S_{2}(\Gamma_{0}(N),\mathbb{Q})$, describe an algorithm for saturating such a basis in $S_{2}(\Gamma_{0}(N),\mathbb{Z})$, and show how these results have applications to computing congruence primes and studying cancellations in the conjectural Birch and Swinnerton-Dyer formula.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4904
- Format
- Thesis
- Title
- Algorithms for Solving Linear Differential Equations with Rational Function Coefficients.
- Creator
-
Imamoglu, Erdal, van Hoeij, Mark, van Engelen, Robert, Agashe, Amod S. (Amod Sadanand), Aldrovandi, Ettore, Aluffi, Paolo, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This thesis introduces two new algorithms to find hypergeometric solutions of second order regular singular differential operators with rational function or polynomial coefficients. Algorithm 3.2.1 searches for solutions of type exp(∫ r dx) ⋅ ₂F₁(a₁,a₂;b₁;f), and Algorithm 5.2.1 searches for solutions of type exp(∫ r dx) (r₀ ⋅ ₂F₁(a₁,a₂;b₁;f) + r₁ ⋅ ₂F₁′(a₁,a₂;b₁;f)), where f, r, r₀, r₁ ∈ ℚ̄(x), a₁, a₂, b₁ ∈ ℚ, and ₂F₁ denotes the Gauss hypergeometric function. The algorithms use modular reduction, Hensel lifting, rational function reconstruction, and rational number reconstruction to do so. Numerous examples from different branches of science (mostly from combinatorics and physics) show that the algorithms presented in this thesis are very effective. Presently, Algorithm 5.2.1 is the most general algorithm in the literature for finding hypergeometric solutions of such operators. This thesis also introduces a fast algorithm (Algorithm 4.2.3) to find integral bases for arbitrary order regular singular differential operators with rational function or polynomial coefficients. A normalized (Algorithm 4.3.1) integral basis for a differential operator provides us with transformations that convert the differential operator to its standard forms (Algorithm 5.1.1), which are easier to solve.
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Imamoglu_fsu_0071E_13942
- Format
- Thesis
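For readability, the two solution types named in the abstract above can be typeset as follows. This is a transcription of the abstract's own notation; the reading of the coefficient field as ℚ̄(x) (rational functions over the algebraic closure of ℚ) is an interpretation of the garbled overline in the source.

```latex
% Solution types sought by Algorithms 3.2.1 and 5.2.1, as described in the abstract.
\[
  y \;=\; \exp\!\Big(\int r\,dx\Big)\; {}_2F_1(a_1,a_2;b_1;f),
\]
\[
  y \;=\; \exp\!\Big(\int r\,dx\Big)\Big( r_0\,{}_2F_1(a_1,a_2;b_1;f) + r_1\,{}_2F_1'(a_1,a_2;b_1;f) \Big),
  \qquad
  f, r, r_0, r_1 \in \overline{\mathbb{Q}}(x), \quad a_1, a_2, b_1 \in \mathbb{Q},
\]
% where {}_2F_1 denotes the Gauss hypergeometric function.
```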
- Title
- All Speed Multi-Phase Flow Solvers.
- Creator
-
Kadioglu, Samet Y., Sussman, Mark, Telotte, John, Hussaini, Yousuff, Wang, Qi, Erlebacher, Gordon, Department of Mathematics, Florida State University
- Abstract/Description
-
A new second order primitive preconditioner technique (an all speed method) for solving all speed single/multi-phase flow is presented. With this technique, one can compute both compressible and incompressible flows with Mach-uniform accuracy and efficiency (i.e., accuracy and efficiency of the method are independent of Mach numbers). The new primitive preconditioner (all speed/Mach uniform) technique can handle both strong and weak shocks, providing highly resolved shock solutions together with correct shock speeds. In addition, the new technique performs very well at the zero Mach limit. In the case of multi-phase flow, the new primitive preconditioner technique enables one to accurately treat phase boundaries in which there is a large impedance mismatch. When solving multi-dimensional all speed multi-phase flows, we introduce adaptive solution techniques which exploit the advantages of Mach-uniform methods. We compute a variety of problems from low (low speed) to high Mach number (high speed) flows, including multi-phase flow tests, i.e., computing the growth and collapse of adiabatic bubbles for the study of underwater explosions.
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-3391
- Format
- Thesis
- Title
- Alternative Models for Stochastic Volatility Corrections for Equity and Interest Rate Derivatives.
- Creator
-
Liang, Tianyu, Kercheval, Alec N., Wang, Xiaoming, Liu, Ewald, Brian, Nichols, Warren D., Department of Mathematics, Florida State University
- Abstract/Description
-
A lot of attention has been paid to stochastic volatility models in which the volatility fluctuates randomly, driven by an additional Brownian motion. In our work, we change the mean level in the mean-reverting process from a constant to a function of the underlying process. We apply our models to the pricing of both equity and interest rate derivatives. Throughout the thesis, a singular perturbation method is employed to derive closed-form formulas up to first-order asymptotic solutions. We also apply multiplicative noise to the arithmetic Ornstein-Uhlenbeck process to produce a wider variety of effects. Calibration and Monte Carlo simulation results show that the proposed models outperform Fouque's original stochastic volatility model during particular windows in history. A more efficient numerical scheme, the heterogeneous multi-scale method (HMM), is introduced to simulate the multi-scale differential equations discussed throughout the chapters.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4990
- Format
- Thesis
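For orientation, a standard Fouque-type fast mean-reverting stochastic volatility setup of the kind the abstract refers to can be written as below. This generic form is an assumption given here only for context; the thesis's modification replaces the constant mean level m with a function of the underlying process, and its exact formulation is not reproduced here.

```latex
% A generic Fouque-type stochastic volatility model (orientation only; the thesis
% replaces the constant mean level m with a function of the underlying process).
\begin{aligned}
  dS_t &= \mu S_t\,dt + f(Y_t)\,S_t\,dW_t, \\
  dY_t &= \alpha\,(m - Y_t)\,dt + \beta\,d\widehat{W}_t,
\end{aligned}
\qquad d\langle W, \widehat{W}\rangle_t = \rho\,dt .
```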
- Title
- Analysis and Approximation of a Two-Band Ginzburg-Landau Model of Superconductivity.
- Creator
-
Chan, Wan-Kan, Gunzburger, Max, Peterson, Janet, Manousakis, Efstratios, Wang, Xiaoming, Department of Mathematics, Florida State University
- Abstract/Description
-
In 2001, the discovery of the intermetallic compound superconductor MgB2, having a critical temperature of 39 K, stirred up great interest in using a generalization of the Ginzburg-Landau model, namely the two-band time-dependent Ginzburg-Landau (2B-TDGL) equations, to model the phenomena of two-band superconductivity. In this work, various mathematical and numerical aspects of the two-dimensional, isothermal, isotropic 2B-TDGL equations in the presence of a time-dependent applied magnetic field and a time-dependent applied current are investigated. A new gauge is proposed to facilitate the inclusion of a time-dependent current into the model. There are three parts in this work. First, the 2B-TDGL model which includes a time-dependent applied current is derived. Then, assuming sufficient smoothness of the boundary of the domain, the applied magnetic field, and the applied current, the global existence, uniqueness and boundedness of weak solutions of the 2B-TDGL equations are proved. Second, the existence, uniqueness, and stability of finite element approximations of the solutions are shown and error estimates are derived. Third, numerical experiments are presented and compared to some known results which are related to MgB2 or general two-band superconductivity. Some novel behaviors are also identified.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-3923
- Format
- Thesis
- Title
- An Analysis of Conjugate Harmonic Components of Monogenic Functions and Lambda Harmonic Functions.
- Creator
-
Ballenger-Fazzone, Brendon Kerr, Nolder, Craig, Harper, Kristine, Aldrovandi, Ettore, Case, Bettye Anne, Quine, J. R. (John R.), Ryan, John Barry, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Clifford analysis is seen as the higher dimensional analogue of complex analysis. This includes a rich study of Clifford algebras and, in particular, monogenic functions, or Clifford-valued functions that lie in the kernel of the Cauchy-Riemann operator. In this dissertation, we explore the relationships between the harmonic components of monogenic functions and expand upon the notion of conjugate harmonic functions. We show that properties of the even part of a Clifford-valued function determine properties of the odd part and vice versa. We also explore the theory of functions lying in the kernel of a generalized Laplace operator, the λ-Laplacian. We explore the properties of these so-called λ-harmonic functions and give the solution to the Dirichlet problem for the λ-harmonic functions on annular domains in Rⁿ.
- Date Issued
- 2016
- Identifier
- FSU_2016SP_BallengerFazzone_fsu_0071E_13136
- Format
- Thesis
- Title
- Analysis of Functions of Split-Complex, Multicomplex, and Split-Quaternionic Variables and Their Associated Conformal Geometries.
- Creator
-
Emanuello, John Anthony, Nolder, Craig, Tabor, Samuel Lynn, Case, Bettye Anne, Quine, J. R. (John R.), Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
The connections between algebra, geometry, and analysis have led the way for numerous results in many areas of mathematics, especially complex analysis. Considerable effort has been made to develop higher dimensional analogues of the complex numbers, such as Clifford algebras and Multicomplex numbers. These rely heavily on geometric notions, and we explore the analysis which results. This is what is called hyper-complex analysis. This dissertation explores the most prominent of these higher dimensional analogues, highlights many of the relevant results which have appeared in the last four decades, and introduces new ideas which can be used to further the research of this discipline. Indeed, the objects of interest are Clifford algebras, the algebra of the Multicomplex numbers, and functions which are valued in these algebras and lie in the kernels of linear operators. These lead to prominent results in Clifford analysis and multicomplex analysis which can be viewed as analogues of complex analysis. Additionally, we explain the link between Clifford algebras and conformal geometry. We explore two low dimensional examples, namely the split-complex numbers and split-quaternions, and demonstrate how linear fractional transformations are conformal mappings in these settings.
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9326
- Format
- Thesis
- Title
- Analysis of Orientational Restraints in Solid-State Nuclear Magnetic Resonance with Applications to Protein Structure Determination.
- Creator
-
Achuthan, Srisairam, Quine, John R., Cross, Timothy A., Sumners, DeWitt, Bertram, Richard, Department of Mathematics, Florida State University
- Abstract/Description
-
Of late, path-breaking advances are taking place and flourishing in the field of solid-state Nuclear Magnetic Resonance (ssNMR) spectroscopy. One of the major applications of ssNMR techniques is to determine high-resolution three-dimensional structures of biological molecules such as membrane proteins. An explicit example of this is PISEMA (Polarization Inversion Spin Exchange at Magic Angle). This dissertation studies and analyzes the use of orientational restraints in general, and particularly the restraints measured through PISEMA. Here, we apply our understanding of orientational restraints to briefly investigate the structure of amantadine-bound M2-TMD, a membrane protein of Influenza A virus. We model the protein backbone structure as a discrete curve in space, with atoms represented by vertices and the covalent bonds connecting them as edges. The oriented structure of this curve with respect to an external vector is emphasized. The map from the surface of the unit sphere to the PISEMA frequency plane is examined in detail. The image is a powder pattern in the frequency plane, and a discussion of the resulting image is provided. Solutions to the PISEMA equations lead to multiple orientations of the magnetic field vector for a given point in the frequency plane; these are captured by sign degeneracies in the vector coordinates. The intensity of NMR powder patterns is formulated in terms of a probability density function for 1-d spectra and a joint probability density function for 2-d spectra. The intensity analysis for 2-d spectra is found to be rather helpful in addressing the robustness of the PISEMA data. To build protein structures by gluing together diplanes, certain necessary conditions have to be met; we formulate these as continuity conditions to be realized for diplanes. The number of oriented protein structures is enumerated in the degeneracy framework for diplanes, and torsion angles are expressed via sign degeneracies. For aligned protein samples, the PISA wheel approach to modeling the protein structure is adopted. Finally, an atomic model of the monomer structure of M2-TMD with amantadine is elucidated based on PISEMA orientational restraints. This is joint work with Jun Hu and Tom Asbury: the PISEMA data were collected by Jun Hu and the molecular modeling was performed by Tom Asbury.
- Date Issued
- 2006
- Identifier
- FSU_migr_etd-0109
- Format
- Thesis
- Title
- Analysis of Regularity and Convergence of Discretization Methods for the Stochastic Heat Equation Forced by Space-Time White Noise.
- Creator
-
Wills, Anthony Clinton, Wang, Xiaoming, Ewald, Brian D., Reina, Laura, Bowers, Philip L., Case, Bettye Anne, Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
We consider the heat equation forced by a space-time white noise and with periodic boundary conditions in one dimension. The equation is discretized in space using four different methods: spectral collocation, spectral truncation, finite differences, and finite elements. For each of these methods we derive a space-time white noise approximation and a formula for the covariance structure of the solution to the discretized equation. The convergence rates are analyzed for each of the methods as the spatial discretization becomes arbitrarily fine, and this is confirmed numerically. Dirichlet and Neumann boundary conditions are also considered. We then derive covariance structure formulas for the two-dimensional stochastic heat equation using each of the different methods. In two dimensions the solution does not have a finite variance, and the formulas for the covariance structure using different methods do not agree in the limit. This means we must analyze the convergence in a different way than in the one-dimensional problem. To understand this difference in the solution as the spatial dimension increases, we find the Sobolev space in which the approximate solution converges to the solution in one and two dimensions. This result is then generalized to n dimensions, which gives a precise statement about the regularity of the solution as the spatial dimension increases. Finally, we consider a generalization of the stochastic heat equation where the forcing term is the spatial derivative of a space-time white noise. For this equation we derive formulas for the covariance structure of the discretized equation using the spectral truncation and finite difference methods. Numerical simulation results are presented and some qualitative comparisons between these two methods are made.
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9488
- Format
- Thesis
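The model problem in the abstract above is, in its one-dimensional periodic form, the heat equation forced by space-time white noise. A standard way to write it is shown below; the notation and the choice of the interval [0, 2π) are assumptions for illustration, not quoted from the dissertation.

```latex
% One-dimensional stochastic heat equation with periodic boundary conditions,
% forced by space-time white noise \dot{W}(t,x) (standard form; notation assumed).
\[
  \frac{\partial u}{\partial t}(t,x) \;=\; \frac{\partial^2 u}{\partial x^2}(t,x) + \dot{W}(t,x),
  \qquad x \in [0, 2\pi),\; t > 0, \qquad u(t,0) = u(t,2\pi).
\]
```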
- Title
- Analysis of Two Partial Differential Equation Models in Fluid Mechanics: Nonlinear Spectral Eddy-Viscosity Model of Turbulence and Infinite-Prandtl-Number Model of Mantle Convection.
- Creator
-
Saka, Yuki, Gunzburger, Max D., Wang, Xiaoming, El-Azab, Anter, Peterson, Janet, Wang, Xiaoqiang, Department of Mathematics, Florida State University
- Abstract/Description
-
This thesis presents two problems in the mathematical and numerical analysis of partial differential equations modeling fluids. The first is related to modeling of turbulence phenomena. One of the objectives in simulating turbulence is to capture the large scale structures in the flow without explicitly resolving the small scales numerically. This is generally accomplished by adding regularization terms to the Navier-Stokes equations. In this thesis, we examine the spectral viscosity models in which only the high-frequency spectral modes are regularized. The objective is to retain the large-scale dynamics while modeling the turbulent fluctuations accurately. The spectral regularization introduces a host of parameters to the model. In this thesis, we rigorously justify effective choices of parameters. The other problem is related to modeling of the mantle flow in the Earth's interior. We study a model equation derived from the Boussinesq equation where the Prandtl number is taken to infinity. This essentially models the flow under the assumption of a large viscosity limit. The novelty in our problem formulation is that the viscosity depends on the temperature field, which makes the mathematical analysis non-trivial. Compared to the constant viscosity case, variable viscosity introduces a second-order nonlinearity which makes the mathematical question of well-posedness more challenging. Here, we prove this using tools from the regularity theory of parabolic partial differential equations.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-2108
- Format
- Thesis
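The second model discussed in the abstract above, the infinite-Prandtl-number limit of the Boussinesq system with temperature-dependent viscosity, is commonly written as follows. This is a standard nondimensional form given only for orientation; the dissertation's exact scaling and boundary conditions may differ.

```latex
% Infinite-Prandtl-number Boussinesq system with temperature-dependent viscosity
% (standard nondimensional form, for orientation only; the thesis's scaling may differ).
\begin{aligned}
  \nabla p &= \nabla\cdot\big(\nu(T)\,(\nabla u + \nabla u^{\mathsf T})\big) + \mathrm{Ra}\,T\,\mathbf{e}_z,
  \qquad \nabla\cdot u = 0, \\
  \frac{\partial T}{\partial t} + u\cdot\nabla T &= \Delta T .
\end{aligned}
```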
- Title
- An Analytic Approach to Estimating the Required Surplus, Benchmark Profit, and Optimal Reinsurance Retention for an Insurance Enterprise.
- Creator
-
Boor, Joseph A. (Joseph Allen), Born, Patricia, Case, Bettye Anne, Tang, Qihe, Rogachev, Grigory, Okten, Giray, Aldrovandi, Ettore, Paris, Steve, Department of Mathematics, Florida State University
- Abstract/Description
-
This paper presents an analysis of the capital needs, needed return on capital, and optimum reinsurance retention for insurance companies, all in the context where claims are either paid out or known with certainty within or soon after the policy period. Rather than focusing on how to estimate such values using Monte Carlo simulation, it focuses on closed-form expressions and approximations for key quantities that are needed for such an analysis. Most of the analysis is also done using a distribution-free approach with respect to the loss severity distribution, so minimal or no assumptions about the specific distribution are needed when analyzing the results. However, one key parameter, which is treated via an exhaustion of cases, involves the degree of parameter uncertainty and the number of separate lines of business involved. This is done for the monoline compound Poisson distribution without parameter uncertainty as well as for situations involving (lognormal) severity parameter uncertainty, (gamma/negative binomial) count parameter uncertainty, the multiline compound Poisson case, and the compound Poisson scenario with parameter uncertainty, especially parameter uncertainty correlated across the lines of business. It shows how the risk of extreme aggregate losses that is inherent in insurance operations may be understood (and, implicitly, managed) by performing various calculations using the loss severity distribution and, where appropriate, key parameters driving the parameter uncertainty distributions. Formulas are developed that estimate the capital and surplus needs of a company (using the VaR approach), and therefore the profit needs of a company, that involve tractable calculations. As part of that process, the benchmark loading for profit is discussed, reflecting both the needed financial support for the amount of capital required to secure a given one-year survival probability and the amount needed to recompense investors for diversifiable risk; an analysis of whether or not the loading for diversifiable risk is needed is performed. Approximations to the needed values are computed using the moments of the capped severity distribution and analytic formulas from the frequency distribution as inputs into method-of-moments normal and lognormal approximations to the percentiles of the aggregate loss distribution. An analysis of the optimum reinsurance retention/policy limit is performed as well, with capped loss distribution/frequency distribution equations resulting from the relationship that the marginal profit (with respect to the loss cap) should be equal to the marginal expense and profit dollar loading with respect to the loss cap. Analytical expressions are developed for the optimum reinsurance retention. Approximations to the optimum retention based on the normal distribution are developed and their error analyzed in great detail. The results indicate that in the vast majority of practical scenarios, the normal distribution approximation to the optimum retention is acceptable.
Also included in the paper is a brief comparison of the VaR (survival probability), expected policyholder deficit (EPD), and TVaR approaches to surplus adequacy (which concludes that the VaR approach is superior for most property/casualty companies), and a mathematical analysis of the propriety of insuring the upper limits of the loss distribution, which concludes that, even if unlimited funds were available to secure losses in capital and reinsurance, it would not be in the insured's best interest to do so. Further inclusions are an illustrative derivation of the generalized collective risk equation and a method for interpolating "along" a mathematical curve rather than directly using the values on the curve. As a prelude to a portion of the analysis, a theorem is proven indicating that in most practical situations, the (n−1)st-order derivatives of a suitable probability mass function at values L, when divided by the product of L and the nth-order derivative, generate a quotient whose limit at infinity is less than 1/n.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4726
- Format
- Thesis
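The abstract above refers to method-of-moments normal and lognormal approximations to percentiles of the aggregate loss distribution. The snippet below is a generic illustration of that idea for a compound Poisson aggregate; all parameter values are hypothetical and it is not the dissertation's formula set.

```python
# Generic illustration: lognormal moment-matched approximation to an aggregate-loss
# percentile (VaR) for a compound Poisson model. All parameters are hypothetical.
import numpy as np
from scipy.stats import norm

def aggregate_var_lognormal(freq_lambda, sev_mean, sev_second_moment, p=0.995):
    """Approximate the p-th percentile of S = sum of N iid severities, N ~ Poisson."""
    mean = freq_lambda * sev_mean                       # E[S]
    var = freq_lambda * sev_second_moment               # Var[S] for a compound Poisson sum
    sigma2 = np.log(1.0 + var / mean**2)                # match the first two lognormal moments
    mu = np.log(mean) - 0.5 * sigma2
    return np.exp(mu + np.sqrt(sigma2) * norm.ppf(p))   # lognormal quantile

# Hypothetical book of business: 250 expected claims, mean severity 10,000, E[X^2] = 4e8.
print(aggregate_var_lognormal(250, 10_000, 4.0e8))
```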
- Title
- Anova for Parameter Dependent Nonlinear PDEs and Numerical Methods for the Stochastic Stokes Equations.
- Creator
-
Chen, Zheng, Gunzburger, Max, Huffer, Fred, Peterson, Janet, Wang, Xiaoqiang, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation includes the application of analysis-of-variance (ANOVA) expansions to analyze solutions of parameter dependent partial differential equations and the analysis and finite element approximations of the Stokes equations with stochastic forcing terms. In the first part of the dissertation, the impact of parameter dependent boundary conditions on the solutions of a class of nonlinear PDEs is considered. Based on the ANOVA expansions of functionals of the solutions, the effects of different parameter sampling methods on the accuracy of surrogate optimization approaches to PDE-constrained optimization are considered. The effects of the smoothness of the functional and the nonlinearity in the PDE on the decay of the higher-order ANOVA terms are studied. The concept of effective dimensions is used to determine the accuracy of the ANOVA expansions. Demonstrations are given to show that whenever truncated ANOVA expansions of functionals provide accurate approximations, optimizers found through a simple surrogate optimization strategy are also relatively accurate. The effects of several parameter sampling strategies on the accuracy of the surrogate optimization method are also considered; it is found that for this sparse sampling application, the Latin hypercube sampling method has advantages over other well-known sampling methods. Although most of the results are presented and discussed in the context of surrogate optimization problems, they also apply to other settings such as stochastic ensemble methods and reduced-order modeling for nonlinear PDEs. In the second part of the dissertation, we study the numerical analysis of the Stokes equations driven by a stochastic process. The random processes we use are white noise, colored noise, and the homogeneous Gaussian process. When the process is white noise, we deal with the singularity of the matrix Green's functions in the form of mild solutions with the aid of the theory of distributions. We develop finite element methods to solve the stochastic Stokes equations. In the 2D and 3D cases, we derive error estimates for the approximate solutions. The results of numerical experiments are provided in the 2D case that demonstrate the algorithm and convergence rates. On the other hand, the singularity of the matrix Green's functions necessitates the use of the homogeneous Gaussian process. In the framework of the theory of abstract Wiener spaces, the stochastic integrals with respect to the homogeneous Gaussian process can be defined on a larger space than L². With some conditions on the density function in the definition of the homogeneous Gaussian process, the matrix Green's functions have well-defined integrals. We have studied the probability properties of this kind of integral and simulated discretized colored noise.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-3851
- Format
- Thesis
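The ANOVA expansion used in the first part of the dissertation above decomposes a functional of the parameters into contributions of increasing dimension. In standard notation (shown for orientation, not quoted from the thesis):

```latex
% Standard ANOVA decomposition of a function of d parameters (orientation only).
\[
  f(y_1,\dots,y_d) \;=\; f_0 \;+\; \sum_{i=1}^{d} f_i(y_i)
  \;+\; \sum_{1 \le i < j \le d} f_{ij}(y_i, y_j) \;+\; \cdots \;+\; f_{1\cdots d}(y_1,\dots,y_d),
\]
% where each term integrates to zero against the sampling measure of any variable it
% contains, so the terms are mutually orthogonal and the variance of f splits accordingly;
% truncating after the low-order terms gives the surrogate used for optimization.
```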
- Title
- Applications of Representation Theory and Higher-Order Perturbation Theory in NMR.
- Creator
-
Srinivasan, Parthasarathy, Quine, John R., Gan, Zhehong, Chapman, Michael S., Bowers, Philip, Sumners, DeWitt, Department of Mathematics, Florida State University
- Abstract/Description
-
Solid State Nuclear Magnetic Resonance (NMR) is perhaps the only spectroscopic technique that allows experimentalists to manipulate the spin systems they are interested in. Of particular interest are nuclei with spins greater than 1/2, or quadrupolar nuclei, as they constitute over 70% of the magnetically active spins. Two of the important mathematical tools used in the theory of NMR are representation theory and perturbation theory. We will use both these tools to describe the underlying mathematical theory for quadrupolar nuclei. The theory shows that for non-symmetric satellite transitions in half-integer quadrupolar nuclei, perturbation effects up to third order feature in the NMR spectra. We will also use irreducible representations to analyze experiments conducted on various spin systems and discuss ways to design new ones. Another topic that will also be explored is the theory of rotary resonance in half-integer quadrupolar nuclei. This theory explains why techniques like FASTER (FAster Spinning gives Transfer Enhancement at Rotary resonance) improve the efficiency of symmetric multiple quantum experiments.
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-1600
- Format
- Thesis
- Title
- Approximating Nonlocal Diffusion Problems Using Quadrature Rules Generated by Radial Basis Functions.
- Creator
-
Lyngaas, Isaac Ron, Peterson, Janet S., Gunzburger, Max D., Burkardt, John V., Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Nonlocal models differ from traditional partial differential equation (PDE) models because they contain no spatial derivatives; instead an appropriate integral is used. Nonlocal models are especially useful in the case where there are issues calculating the spatial derivatives of a PDE model. In many applications (e.g., biological systems, flow through porous media) the observed rate of diffusion is not accurately modeled by the standard diffusion differential operator but rather exhibits so-called anomalous diffusion. Anomalous diffusion can be represented in a PDE model by using a fractional Laplacian operator in space whereas the nonlocal approach only needs to slightly modify its integral formulation to model anomalous diffusion. Anomalous diffusion is one such case where approximating the spatial derivative operator is a difficult problem. In this work, an approach for approximating standard and anomalous nonlocal diffusion problems using a new technique that utilizes radial basis functions (RBFs) is introduced and numerically tested. The typical approach for approximating nonlocal diffusion problems is to use a Galerkin formulation. However, the Galerkin formulation for nonlocal diffusion problems can often be difficult to compute efficiently and accurately especially for problems in multiple dimensions. Thus, we investigate the alternate approach of using quadrature rules generated by RBFs to approximate the nonlocal diffusion problem. This work will be split into three major parts. The first will introduce RBFs and give some examples of how they are used. This part will motivate our approach for using RBFs on the nonlocal diffusion problem. In the second part, we will derive RBF-generated quadrature rules in one dimension and show they can be used to approximate nonlocal diffusion problems. The final part will address how the RBF quadrature approach can be extended to higher dimensional problems. Numerical test cases are shown for both the standard and anomalous nonlocal diffusion problems and compared with standard finite element approximations. Preliminary results show that the method introduced is viable for approximating nonlocal diffusion problems and that highly accurate approximations are possible using this approach.
- Date Issued
- 2016
- Identifier
- FSU_FA2016_Lyngaas_fsu_0071N_13512
- Format
- Thesis
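The quadrature rules referred to in the abstract above are generated by radial basis functions. As a minimal one-dimensional illustration (Gaussian RBF, interval domain, all parameters hypothetical; this is not the dissertation's construction for the nonlocal operator), one can integrate the RBF interpolant exactly to obtain quadrature weights:

```python
# Minimal sketch: quadrature weights generated by Gaussian RBFs on an interval,
# then used to approximate a 1-D integral (illustrative, not the dissertation's code).
import numpy as np
from scipy.special import erf

def rbf_quadrature_weights(nodes, a, b, eps=2.0):
    """Weights w with  sum_j w_j f(x_j) ~ int_a^b f(x) dx, built from Gaussian RBFs."""
    r = nodes[:, None] - nodes[None, :]
    A = np.exp(-(eps * r) ** 2)                          # interpolation matrix A_ij = phi(|x_i - x_j|)
    # Exact integrals of each basis function over [a, b]
    moments = np.sqrt(np.pi) / (2 * eps) * (erf(eps * (b - nodes)) - erf(eps * (a - nodes)))
    return np.linalg.solve(A, moments)                   # A is symmetric, so A w = moments

a, b = 0.0, 1.0
nodes = np.linspace(a, b, 15)
w = rbf_quadrature_weights(nodes, a, b)
print(w @ np.cos(nodes), np.sin(1.0))                    # approximate vs. exact int_0^1 cos(x) dx
```

Applying weights of this kind to the kernel-weighted differences appearing in the nonlocal diffusion integral is the basic idea behind the approach described above.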
- Title
- Arithmetic Aspects of Noncommutative Geometry: Motives of Noncommutative Tori and Phase Transitions on GL(n) and Shimura Varieties Systems.
- Creator
-
Shen, Yunyi, Marcolli, Matilde, Aluffi, Paolo, Chicken, Eric, Bowers, Philip L., Petersen, Kathleen L., Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
In this dissertation, we study three important cases in noncommutative geometry. We first observe the standard noncommutative object, the noncommutative torus, in noncommutative motives. We work with the category of holomorphic bundles on a noncommutative torus, which is known to be equivalent to the heart of a nonstandard t-structure on coherent sheaves of an elliptic curve. We then introduce a notion of (weak) t-structure in dg categories. By lifting the nonstandard t-structure to the t-structure that we defined, we find a way of seeing a noncommutative torus in noncommutative motives. By applying the t-structure to a noncommutative torus and describing the cyclic homology of the category of holomorphic bundles on the noncommutative torus, we finally show that the periodic cyclic homology functor induces a decomposition of the motivic Galois group of the Tannakian category generated by the associated auxiliary elliptic curve. In the second case, we generalize the results of Laca, Larsen, and Neshveyev on the GL2-Connes-Marcolli system to the GLn-Connes-Marcolli systems. We introduce and define the GLn-Connes-Marcolli systems and discuss the existence and uniqueness questions for the KMS equilibrium states. Using an ergodicity argument and a Hecke pair calculation, we classify the KMS states at different inverse temperatures β. Specifically, we show that in the range n − 1 < β ≤ n, there exists only one KMS state. We prove that there are no KMS states when β < n − 1 and β ≠ 0, 1, . . . , n − 1, while we actually construct KMS states for integer values of β in 1 ≤ β ≤ n − 1. For β > n, we characterize the extremal KMS states. In the third case, we push the previous results to more abstract settings. We mainly study the connected Shimura dynamical systems. We give the definition of essential and superficial KMS states and develop a set of arithmetic tools to generalize the results of the previous case. We then prove the uniqueness of the essential KMS states and show the existence of essential KMS states at high inverse temperatures.
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Shen_fsu_0071E_13982
- Format
- Thesis
- Title
- Asset Market Dynamics of Heterogeneous Agent Models with Learning.
- Creator
-
Guan, Yuanying, Beaumont, Paul M., Kercheval, Alec N., Marquis, Milton, Mesterton-Gibbons, Mike, Nichols, Warren D., Department of Mathematics, Florida State University
- Abstract/Description
-
The standard Lucas asset pricing model makes two common assumptions of homogeneous agents and rational expectations equilibrium. However, these assumptions are unrealistic for real financial markets. In this work, we relax these assumptions and establish a Lucas type agent-based asset pricing model. We create an artificial economy with a single risky asset and populate it with heterogeneous, boundedly rational, utility maximizing, infinitely lived and forward looking agents. We restrict agents' information by allowing them to use only available information when they make optimal choices. With independent, identically distributed market returns, agents are able to compute their policy functions and the equilibrium pricing function with Duffie's method (Duffie, 1988) without perfect information about the market. When agents are out of equilibrium, they simultaneously compute their policy functions with predictive pricing functions and use adaptive learning schemes to learn the motion of the correct pricing function. Agents are able to learn the correct equilibrium pricing function with certain risk and learning parameters. In some other cases, the market price has excess volatility and the trading volume is very high. Simulations of the market behavior show rich dynamics, including a whole cascade from period doubling bifurcations to chaos. We apply the full families theory (De Melo and Van Strien, 1993) to prove that the rich dynamics do not come from numerical errors but are embedded in the structure of our dynamical system.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-3938
- Format
- Thesis
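For context, the homogeneous-agent Lucas model that this dissertation (and the two related records below) generalizes prices the risky asset through the representative agent's Euler equation, in the standard form shown here for orientation only:

```latex
% Euler equation of the standard Lucas asset pricing model (orientation only).
\[
  p_t\,u'(c_t) \;=\; \beta\,\mathbb{E}_t\!\big[\, u'(c_{t+1})\,(p_{t+1} + d_{t+1}) \big],
\]
% where p_t is the asset price, d_t the dividend, c_t consumption, beta the discount
% factor, and u the utility function; the dissertation replaces the representative
% agent with heterogeneous, boundedly rational agents who learn the pricing function.
```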
- Title
- Asset Pricing Equilibria for Heterogeneous, Limited-Information Agents.
- Creator
-
Jones, Dawna Candice, Kercheval, Alec N., Beaumont, Paul M., Van Winkle, David H., Nichols, Warren, Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
The standard general equilibrium asset pricing models typically make two simplifying assumptions: homogeneous agents and the existence of a rational expectations equilibrium. This context sometimes yields outcomes that are inconsistent with the empirical findings. We hypothesize that allowing agent heterogeneity could assist in replicating the empirical results. However, the inclusion of heterogeneity in models where agents are fully rational proves impossible to solve without severe simplifying assumptions. The reason for this difficulty is that heterogeneous agent models generate an endogenously complicated distribution of wealth across the agents. The state space for each agent's optimization problem includes the complex dynamics of the wealth distribution. There is no general way to characterize the interaction between the distribution of wealth and the macroeconomic aggregates. To address this issue, we implement an agent-based model where the agents have bounded rationality. In our model, we have a complete markets economy with two agents and two assets. The agents are heterogeneous and utility maximizing with constant coefficient of relative risk aversion [CRRA] preferences. How the agents address the stochastic behaviour of the evolution of the wealth distribution is central to our task since aggregate prices depend on this behaviour. An important component of this dissertation involves dealing with the computational difficulty of dynamic heterogeneous-agent models. That is, in order to predict prices, agents need a way to keep track of the evolution of the wealth distribution. We do this by allowing each agent to assume that a price-equivalent representative agent exists and that the representative agent has a constant coefficient of relative risk aversion. In so doing, the agents are able to formulate predictive pricing and demand functions which allow them to predict aggregate prices and make consumption and investment decisions each period. However, the agents' predictions are only approximately correct. Therefore, we introduce a learning mechanism to maintain the required level of accuracy in the agents' price predictions. From this setup, we find that the model, with learning, will converge over time to an approximate expectations equilibrium, provided that the initial conditions are close enough to the rational expectations equilibrium prices. Two main contributions in our work are: 1) to formulate a new concept of approximate equilibria, and 2) to show how equilibria can be approximated numerically, despite the fact that the true state space at any point in time is mathematically complex. These contributions offer the possibility of characterizing a new class of asset pricing models where agents are heterogeneous and only slightly limited in their rationality. That is, the partially informed agents in our model are able to forecast and utility-maximize only just as well as economists who face problems of estimating aggregate variables. By using an exogenously assigned adaptive learning rule, we analyse this implementation in a Lucas-type heterogeneous agent model. We focus on the sensitivity of the risk parameter and the convergence of the model to an approximate expectations equilibrium. Also, we study the extent to which adaptive learning is able to explain the empirical findings in an asset pricing model with heterogeneous agents.
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9624
- Format
- Thesis
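The learning mechanism described in the abstract above rests on CRRA preferences and an adaptive updating rule for price forecasts. The snippet below is a minimal, hypothetical illustration of those two ingredients only, not the dissertation's model: the CRRA utility function is standard, while the constant-gain forecast update, the price/dividend parameterization, and all variable names are assumptions made for the example.

```python
import numpy as np

def crra_utility(c, gamma):
    """CRRA utility: u(c) = c**(1-gamma)/(1-gamma), with log utility at gamma = 1."""
    c = np.asarray(c, dtype=float)
    if np.isclose(gamma, 1.0):
        return np.log(c)
    return c ** (1.0 - gamma) / (1.0 - gamma)

def constant_gain_update(belief, price, dividend, gain=0.05):
    """One constant-gain learning step for a perceived price/dividend ratio.

    The agent nudges its belief toward the realized ratio; this is a
    hypothetical stand-in for the learning mechanism in the abstract.
    """
    realized = price / dividend
    return belief + gain * (realized - belief)

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
belief = 20.0
for _ in range(200):
    dividend = 1.0 + 0.02 * rng.standard_normal()
    price = 22.0 * dividend * np.exp(0.01 * rng.standard_normal())  # noisy "true" pricing rule
    belief = constant_gain_update(belief, price, dividend)
print(round(belief, 2))   # settles near 22
```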
- Title
- Asset Pricing in a Lucas Framework with Boundedly Rational, Heterogeneous Agents.
- Creator
-
Culham, Andrew J. (Andrew James), Beaumont, Paul M., Kercheval, Alec N., Schlagenhauf, Don, Goncharov, Yevgeny, Kopriva, David, Department of Mathematics, Florida State University
- Abstract/Description
-
The standard dynamic general equilibrium model of financial markets does a poor job of explaining the empirical facts observed in real market data. The common assumptions of homogeneous investors and rational expectations equilibrium are thought to be major factors leading to this poor performance. In an attempt to relax these assumptions, the literature has seen the emergence of agent-based computational models where artificial economies are populated with agents who trade in stylized asset markets. Although they offer a great deal of flexibility, the theoretical community has often criticized these agent-based models because the agents are too limited in their analytical abilities. In this work, we create an artificial market with a single risky asset and populate it with fully optimizing, forward looking, infinitely lived, heterogeneous agents. We restrict the state space of our agents by not allowing them to observe the aggregate distribution of wealth so they are required to compute their conditional demand functions while simultaneously learning the equations of motion for the aggregate state variables. We develop an efficient and flexible model code that can be used to explore a wide range of asset pricing questions while remaining consistent with conventional asset pricing theory. We validate our model and code against known analytical solutions as well as against a new analytical result for agents with differing discount rates. Our simulation results for general cases without known analytical solutions show that, in general, agents' asset holdings converge to a steady-state distribution and the agents are able to learn the equilibrium prices despite the restricted state space. Further work will be necessary to determine whether the exceptional cases have some fundamental theoretical explanation or can be attributed to numerical issues. We conjecture that convergence to the equilibrium is global and that the market-clearing price acts to guide the agents' forecasts toward that equilibrium.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-2948
- Format
- Thesis
- Title
- Asymptotic Behaviour of Convection in Porous Media.
- Creator
-
Parshad, Rana Durga, Wang, Xiaoming, Ye, Ming, Case, Bettye Anne, Ewald, Brian, Kercheval, Alec N., Nolder, Craig, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation investigates asymptotic behaviour of convection in a fluid saturated porous medium. We analyse the Darcy-Boussinesq system under perturbation of the Darcy-Prandtl number parameter. In very tightly packed media this parameter is of very large order and can be driven to infinity to yield the infinite Darcy-Prandtl number model. We show convergence of global attractors and invariant measures of the Darcy-Boussinesq system to that of the infinite Darcy-Prandtl number model with respect to perturbation of the Darcy-Prandtl number parameter.
- Date Issued
- 2009
- Identifier
- FSU_migr_etd-2182
- Format
- Thesis
- Title
- An Asymptotically Preserving Method for Multiphase Flow.
- Creator
-
Jemison, Matthew, Sussman, Mark, Nof, Doron, Cogan, Nick, Gallivan, Kyle, Wang, Xiaoming, Department of Mathematics, Florida State University
- Abstract/Description
-
A unified, asymptotically-preserving method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, which enables large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. The new method enables one to simulate the flow of multiple materials, each possessing a potentially exotic equation of state. Simulations of multiphase flow in one and two dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large "impedance mismatch." Additionally, new techniques related to the Moment-of-Fluid interface reconstruction are presented, including a novel, asymptotically-preserving method for capturing "filaments," and an improved method for initializing the Moment-of-Fluid optimization problem on unstructured, triangular grids. (A brief illustrative sketch follows this record.)
- Date Issued
- 2014
- Identifier
- FSU_migr_etd-9012
- Format
- Thesis
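The abstract above notes that the semi-implicit scheme reduces to the standard incompressible pressure projection in the infinite-sound-speed limit. As a point of reference only, the sketch below shows that classical projection step for a periodic two-dimensional velocity field using FFTs; it is not the dissertation's asymptotically preserving algorithm, and the grid, test field, and function names are assumptions.

```python
import numpy as np

def project_divergence_free(u, v, dx):
    """Project a periodic 2-D velocity field onto its divergence-free part.

    Spectral (FFT) version of the pressure-projection step used by standard
    incompressible solvers; a sketch, not the dissertation's scheme.
    """
    n, m = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(m, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                        # avoid division by zero for the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * KX * uh + 1j * KY * vh   # Fourier symbol of the divergence
    phi_h = -div_h / k2                   # solve Laplacian(phi) = div(u):  -k^2 phi_h = div_h
    uh -= 1j * KX * phi_h                 # subtract the gradient part grad(phi)
    vh -= 1j * KY * phi_h
    return np.real(np.fft.ifft2(uh)), np.real(np.fft.ifft2(vh))

# Usage: project a smooth periodic field, then confirm the projection is idempotent.
n, dx = 64, 1.0 / 64
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y) + 0.3 * np.cos(2 * np.pi * X)
v = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
up, vp = project_divergence_free(u, v, dx)
up2, vp2 = project_divergence_free(up, vp, dx)
print(float(np.max(np.abs(up2 - up))))    # ~1e-15: projecting twice changes nothing
```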
- Title
- Biomedical Applications of Shape Descriptors.
- Creator
-
Celestino, Christian Edgar Laing, Sumners, De Witt, Greenbaum, Nancy, Mio, Washington, Hurdal, Monica, Department of Mathematics, Florida State University
- Abstract/Description
-
Given an edge-oriented polygonal graph in R3, we describe a method for computing the writhe as the average of weighted directional writhe numbers of the graph in a few directions. These directions are determined by the graph and the weights are determined by areas of path-connected open regions on the unit sphere. Within each open region, the directional writhe is constant. We developed formulas for the writhe of polygons on Bravais lattices and a few crystallographic groups, and discuss applications to ring polymers. In addition, we obtained a closed formula for the writhe for graphs which extends the formula for the writhe of a polygon in R3, including the important special case of writhe of embedded open arcs. Additionally, we have developed shape descriptors based on a family of geometric measures for the purpose of classification and identification of shape differences for graphs. These shape descriptors involve combinations of writhe and average crossing numbers of curves, as well as total curvature, ropelength and thickness. We have applied these shape descriptors to RNA tertiary structures and families of sulcal curves from human brain surfaces. Preliminary results give an automatic method to distinguish RNA motifs. Clear differentiation among tRNA and/or ribozymes, and a distinction between mesophilic and thermophilic tRNA is shown. In addition, we notice a direct correlation between the length of an RNA backbone and its mean average crossing number, which is described accurately by a power function. As a neuroscience application, human brain surfaces were extracted from MRI scans of human brains. In our preliminary results, an automatic differentiation between sulcal paths from the left or right hemispheres, an age differentiation and a male-female classification were achieved. (A brief illustrative sketch follows this record.)
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-3314
- Format
- Thesis
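Since the abstract above revolves around the writhe of polygonal curves, a crude numerical approximation of the Gauss double-integral definition of writhe may help fix ideas. The routine below is a generic midpoint quadrature, not the averaged directional-writhe method developed in the thesis; the refinement level and test polygon are arbitrary choices.

```python
import numpy as np

def polygon_writhe(vertices, refine=4):
    """Crude numerical writhe of a closed polygon via the Gauss double integral.

    Each edge is split into `refine` sub-segments and
    Wr = (1/4pi) * sum (t_i x t_j) . (r_i - r_j) / |r_i - r_j|^3 ds_i ds_j
    is approximated with a midpoint rule over pairs of sub-segments.
    """
    V = np.asarray(vertices, dtype=float)
    pts = []
    for a, b in zip(V, np.roll(V, -1, axis=0)):       # closed polygon edges
        for k in range(refine):
            pts.append(a + (b - a) * k / refine)
    pts = np.array(pts)
    nxt = np.roll(pts, -1, axis=0)
    mid = 0.5 * (pts + nxt)                           # sub-segment midpoints
    seg = nxt - pts                                   # sub-segment vectors (t * ds)
    wr = 0.0
    for i in range(len(mid)):
        d = mid[i] - mid                              # r_i - r_j for all j
        cross = np.cross(seg[i], seg)                 # t_i x t_j (times ds_i ds_j)
        num = np.einsum("jk,jk->j", cross, d)
        dist3 = np.linalg.norm(d, axis=1) ** 3
        dist3[i] = np.inf                             # skip the self term
        wr += np.sum(num / dist3)
    return wr / (4.0 * np.pi)

# A planar polygon has writhe 0; the quadrature reproduces that exactly.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(round(polygon_writhe(square), 6))
```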
- Title
- Calibration of Local Volatility Models and Proper Orthogonal Decomposition Reduced Order Modeling for Stochastic Volatility Models.
- Creator
-
Geng, Jian, Navon, Ionel Michael, Case, Bettye Anne, Contreras, Rob, Okten, Giray, Kercheval, Alec N., Ewald, Brian, Department of Mathematics, Florida State University
- Abstract/Description
-
There are two themes in this thesis: local volatility models and their calibration, and Proper Orthogonal Decomposition (POD) reduced order modeling with application to stochastic volatility models, which has potential use in their calibration. In the first part of this thesis (chapters II-III), the local volatility models are introduced first and then calibrated for European options across all strikes and maturities of the same underlying. There is no interpolation or extrapolation of either the option prices or the volatility surface. We do not make any assumption regarding the shape of the volatility surface except to assume that it is smooth. Due to the smoothness assumption, we apply a second order Tikhonov regularization. We choose the Tikhonov regularization parameter as one of the singular values of the Jacobian matrix of the Dupire model. Finally we perform extensive numerical tests to assess and verify the aforementioned techniques for both local volatility models with known analytical solutions of European option prices and real market option data. In the second part of this thesis (chapters IV-V), stochastic volatility models and POD reduced order modeling are introduced in turn. Then POD reduced order modeling is applied to the Heston stochastic volatility model for the pricing of European options. Finally, chapter VI summarizes the thesis and points out future research areas. (A brief illustrative sketch follows this record.)
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7388
- Format
- Thesis
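The calibration described above relies on second-order Tikhonov regularization. The sketch below applies that regularization to a generic linear inverse problem so the structure (data misfit plus a second-derivative smoothness penalty) is visible; the forward operator here is a random matrix stand-in, not the Dupire pricing map, and the parameter choices are assumptions.

```python
import numpy as np

def second_order_tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 with L the second-difference operator.

    A generic sketch of second-order Tikhonov regularization; in the thesis
    setting A would be the Jacobian of the pricing map, not a random matrix.
    """
    n = A.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]              # discrete second derivative
    K = np.vstack([A, lam * L])                       # stacked least-squares system
    rhs = np.concatenate([b, np.zeros(n - 2)])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

# Recover a smooth "volatility-like" curve from noisy indirect data.
rng = np.random.default_rng(1)
n = 50
x_true = 0.2 + 0.1 * np.sin(np.linspace(0, np.pi, n))
A = rng.standard_normal((80, n)) / np.sqrt(n)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = second_order_tikhonov(A, b, lam=1.0)
print(float(np.max(np.abs(x_hat - x_true))))          # small reconstruction error
```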
- Title
- Calibration of Multivariate Generalized Hyperbolic Distributions Using the EM Algorithm, with Applications in Risk Management, Portfolio Optimization and Portfolio Credit Risk.
- Creator
-
Hu, Wenbo, Kercheval, Alec, Huffer, Fred, Case, Bettye, Nichols, Warren, Nolder, Craig, Department of Mathematics, Florida State University
- Abstract/Description
-
The distributions of many financial quantities are well-known to have heavy tails, exhibit skewness, and have other non-Gaussian characteristics. In this dissertation we study an especially promising family: the multivariate generalized hyperbolic distributions (GH). This family includes and generalizes the familiar Gaussian and Student t distributions, and the so-called skewed t distributions, among many others. The primary obstacle to the applications of such distributions is the numerical difficulty of calibrating the distributional parameters to the data. In this dissertation we describe a way to stably calibrate GH distributions for a wider range of parameters than has previously been reported. In particular, we develop a version of the EM algorithm for calibrating GH distributions. This is a modification of methods proposed in McNeil, Frey, and Embrechts (2005), and generalizes the algorithm of Protassov (2004). Our algorithm extends the stability of the calibration procedure to a wide range of parameters, now including parameter values that maximize log-likelihood for our real market data sets. This allows, for the first time, certain GH distributions to be used in modeling contexts where they were previously numerically intractable. Our algorithm enables us to make new uses of GH distributions in three financial applications. First, we forecast univariate Value-at-Risk (VaR) for stock index returns, and we show in out-of-sample backtesting that the GH distributions outperform the Gaussian distribution. Second, we calculate an efficient frontier for equity portfolio optimization under the skewed-t distribution and using Expected Shortfall as the risk measure. Here, we show that the Gaussian efficient frontier is actually unreachable if returns are skewed t distributed. Third, we build an intensity-based model to price Basket Credit Default Swaps by calibrating the skewed t distribution directly, without the need to separately calibrate the skewed t copula. To our knowledge this is the first use of the skewed t distribution in portfolio optimization and in portfolio credit risk. (A brief illustrative sketch follows this record.)
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-3694
- Format
- Thesis
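Calibrating generalized hyperbolic distributions with an EM algorithm is the technical core of the abstract above. As a much-simplified relative, the sketch below runs EM for a multivariate Student t with fixed degrees of freedom, a special case of the normal mixture structure underlying the GH family; it is not the dissertation's algorithm, and the synthetic data and the fixed nu are assumptions.

```python
import numpy as np

def fit_student_t_em(X, nu=5.0, iters=200):
    """EM calibration of a multivariate Student t with fixed degrees of freedom.

    E-step: latent mixing weights given the current fit.
    M-step: weighted updates of the location and scatter parameters.
    This only hints at the structure of the full GH algorithm in the abstract.
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    for _ in range(iters):
        diff = X - mu
        delta = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
        w = (nu + d) / (nu + delta)                   # E-step weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()   # M-step: weighted location
        diff = X - mu
        sigma = (w[:, None] * diff).T @ diff / n      # M-step: weighted scatter
    return mu, sigma

# Heavy-tailed synthetic sample centered at (1, 1); the fit recovers the location.
rng = np.random.default_rng(2)
Z = rng.standard_normal((2000, 2))
W = rng.chisquare(5, size=2000) / 5.0
X = 1.0 + Z / np.sqrt(W)[:, None]
mu_hat, sigma_hat = fit_student_t_em(X, nu=5.0)
print(np.round(mu_hat, 2))
```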
- Title
- Centroidal Voronoi Tessellations for Mesh Generation: from Uniform to Anisotropic Adaptive Triangulations.
- Creator
-
Nguyen, Hoa V., Gunzburger, Max D., El-Azab, Anter, Peterson, Janet, Wang, Xiaoming, Wang, Xiaoqiang, Department of Mathematics, Florida State University
- Abstract/Description
-
Mesh generation in regions in Euclidean space is a central task in computational science, especially for commonly used numerical methods for the solution of partial differential equations (PDEs), e.g., finite element and finite volume methods. Mesh generation can be classified into several categories depending on the element sizes (uniform or non-uniform) and shapes (isotropic or anisotropic). Uniform meshes have been well studied and still find application in a wide variety of problems. However, when solving certain types of partial differential equations for which the solution variations are large in some regions of the domain, non-uniform meshes result in more efficient calculations. If the solution changes more rapidly in one direction than in others, non-uniform anisotropic meshes are preferred. In this work, first we present an algorithm to construct uniform isotropic meshes and discuss several mesh quality measures. Secondly we construct an adaptive method which produces non-uniform anisotropic meshes that are well suited for numerically solving PDEs such as the convection diffusion equation. For the uniform Delaunay triangulation of planar regions, we focus on how one selects the positions of the vertices of the triangulation. We discuss a recently developed method, based on the centroidal Voronoi tessellation (CVT) concept, for effecting such triangulations and present two algorithms, including one new one, for CVT-based grid generation. We also compare several methods, including CVT-based methods, for triangulating planar domains. Furthermore, we define several quantitative measures of the quality of uniform grids. We then generate triangulations of several planar regions, including some having complexities that are representative of what one may encounter in practice. We subject the resulting grids to visual and quantitative comparisons and conclude that all the methods considered produce high-quality uniform isotropic grids and that the CVT-based grids are at least as good as any of the others. For more general grid generation settings, e.g., non-uniform and/or anisotropic grids, such quantitative comparisons are much more difficult, if not impossible, to either make or interpret. This motivates us to develop CVT-based adaptive non-uniform anisotropic mesh refinement in the context of solving the convection-diffusion equation with emphasis on convection-dominated problems. The challenge in the numerical approximation of this equation is due to large variations in the solution over small regions of the physical domain. Our method not only refines the underlying grid at these regions but also stretches the elements according to the solution variation. Three main ingredients are incorporated to improve the accuracy of numerical solutions and increase the algorithm's robustness and efficiency. First, a streamline upwind Petrov Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor. Our algorithm has been tested on a variety of 2-dimensional examples. It is robust in detecting layers and efficient in resolving non-physical oscillations in the numerical approximation. (A brief illustrative sketch follows this record.)
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-2616
- Format
- Thesis
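Centroidal Voronoi tessellations are the organizing concept of the abstract above. The sketch below approximates a CVT of the unit square with a Monte Carlo version of Lloyd's iteration; it illustrates the concept only and is not one of the grid-generation algorithms compared in the thesis. The sample counts and iteration numbers are arbitrary.

```python
import numpy as np

def cvt_lloyd(generators, n_samples=20000, iters=50, rng=None):
    """Approximate a centroidal Voronoi tessellation of the unit square.

    Monte Carlo Lloyd iteration: sample the domain, assign each sample to its
    nearest generator, then move every generator to the centroid of its
    assigned samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    gens = np.asarray(generators, dtype=float).copy()
    for _ in range(iters):
        samples = rng.random((n_samples, 2))          # uniform points in [0,1]^2
        d2 = ((samples[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)                   # Voronoi region membership
        for k in range(len(gens)):
            mask = nearest == k
            if mask.any():
                gens[k] = samples[mask].mean(axis=0)  # move to region centroid
    return gens

rng = np.random.default_rng(3)
gens = cvt_lloyd(rng.random((16, 2)), rng=rng)
print(np.round(gens[:4], 3))                          # generators spread out nearly uniformly
```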
- Title
- Character Varieties of Knots and Links with Symmetries.
- Creator
-
Sparaco, Leona H., Petersen, Kathleen L., Harper, Kristine, Ballas, Sam, Bowers, Philip L., Hironaka, Eriko, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Let M be a hyperbolic manifold. The SL2(C) character variety of M is essentially the set of all representations ρ : π1(M) → SL2(C) up to trace equivalence. This algebraic set is connected to many geometric properties of the manifold M. We examine the effect of symmetries of M on its character variety. We compute the SL2(C) and PSL2(C) character varieties for an infinite family of two-bridge hyperbolic knots with symmetry. We explore the effect the symmetry has on the character variety and exploit this symmetry to factor the character variety. We then find the geometric genus of both components of the character variety. We compute the SL2(C) character variety for the Borromean ring complement in S^3. Further, we explore how the symmetries affect this character variety. Finally, we prove some general results about the structure of character varieties of links with symmetries.
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Sparaco_fsu_0071E_13851
- Format
- Thesis
- Title
- Chern Classes of Sheaves of Logarithmic Vector Fields for Free Divisors.
- Creator
-
Liao, Xia, Aluffi, Paolo, Reina, Laura, Klassen, Eric P., Aldrovandi, Ettore, Petersen, Kathleen, Department of Mathematics, Florida State University
- Abstract/Description
-
The thesis work we present here focuses on solving a conjecture raised by Aluffi about Chern-Schwartz-MacPherson classes. Let $X$ be a nonsingular variety defined over an algebraically closed field $k$ of characteristic $0$, $D$ a reduced effective divisor on $X$, and $U = X \smallsetminus D$ the open complement of $D$ in $X$. The conjecture states that $c_{\textup{SM}}(1_U) = c(\textup{Der}_X(-\log D)) \cap [X]$ in $A_{*}(X)$ for any locally quasi-homogeneous free divisor $D$. We prove a stronger version of this conjecture. We also report on work aimed at studying the Grothendieck class of hypersurfaces of low degree. In this work, we verified the Geometric Chevalley-Warning conjecture in several low dimensional cases.
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7467
- Format
- Thesis
- Title
- Chern-Schwartz-Macpherson Classes of Graph Hypersurfaces and Schubert Varieties.
- Creator
-
Stryker, Judson P., Aluffi, Paolo, Van Engelen, Robert, Aldrovandi, Ettore, Hironaka, Eriko, Van Hoeij, Mark, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation finds some partial results in support of two positivity conjectures regarding the Chern-Schwartz-MacPherson (CSM) classes of graph hypersurfaces (conjectured by Aluffi and Marcolli) and Schubert varieties (conjectured by Aluffi and Mihalcea). Direct calculations of some of these CSM classes are performed. Formulas for CSM classes of families of both graph hypersurfaces and coefficients of Schubert varieties are developed. Additionally, the positivity of the CSM class of certain families of these varieties is proven. The first chapter starts with an overview and introduction to the material along with some of the background material needed to understand this dissertation. In the second chapter, a series of equivalences of graph hypersurfaces that are useful for reducing the number of cases that must be calculated is developed. A table of CSM classes of all but one graph with 6 or fewer edges is explicitly computed. This table also contains Fulton Chern classes and Milnor classes for the graph hypersurfaces. Using the equivalences and a series of formulas from a paper by Aluffi and Mihalcea, a new series of formulas for the CSM classes of certain families of graph hypersurfaces is deduced. I prove positivity for all graph hypersurfaces corresponding to graphs with first Betti number of 3 or less. Formulas for graphs equivalent to graphs with 6 or fewer edges are developed (as well as cones over graphs with 6 or fewer edges). In the third chapter, CSM classes of Schubert varieties are discussed. It is conjectured by Aluffi and Mihalcea that all Chern classes of Schubert varieties are represented by effective cycles. This is proven in special cases by B. Jones. I examine some positivity results by analyzing and applying combinatorial methods to a formula by Aluffi and Mihalcea. Positivity of what could be considered the "typical" case for low codimensional coefficients is found. Some other general results for positivity of certain coefficients of Schubert varieties are found. This technique establishes positivity for some known cases very quickly, such as the codimension 1 case as described by Jones, as well as establishing positivity for codimension 2 and families of cases that were previously unknown. An unexpected connection between one family of cases and a second order PDE is also found. Positivity is shown for all cases of codimensions 1-4 and some higher codimensions are discussed. In both the graph hypersurfaces and Schubert varieties, all calculated Chern-Schwartz-MacPherson classes were found to be positive.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-1531
- Format
- Thesis
- Title
- Closed Form Solutions of Linear Difference Equations.
- Creator
-
Cha, Yongjae, Van Hoeij, Mark, Van Engelen, Robert A., Agashe, Amod, Aldrovandi, Ettore, Aluffi, Paolo, Department of Mathematics, Florida State University
- Abstract/Description
-
In this thesis we present an algorithm that finds closed form solutions for homogeneous linear recurrence equations. The key idea is transforming an input operator Linp to an operator Lg with known solutions. The main problem of this idea is how to find a solved equation Lg to which Linp can be reduced. To solve this problem, we use local data of a difference operator that is invariant under the transformation. (A brief illustrative sketch follows this record.)
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-3960
- Format
- Thesis
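For a sense of what a closed-form solution of a linear recurrence looks like, the snippet below asks a computer algebra system to solve a Fibonacci-type equation. It uses SymPy's rsolve routine purely as an illustration of the target output; it is not the algorithm developed in the dissertation.

```python
from sympy import Function, rsolve, symbols

# Closed-form solution of a linear recurrence via an existing CAS routine.
n = symbols("n", integer=True)
y = Function("y")

# Fibonacci-type recurrence: y(n+2) = y(n+1) + y(n), with y(0) = 0, y(1) = 1.
solution = rsolve(y(n + 2) - y(n + 1) - y(n), y(n), {y(0): 0, y(1): 1})
print(solution)   # a closed form in powers of the golden ratio
```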
- Title
- Combinatorial Type Problems for Triangulation Graphs.
- Creator
-
Wood, William E., Bowers, Philip, Hawkes, Lois, Bellenot, Steve, Klassen, Eric, Nolder, Craig, Quine, Jack, Department of Mathematics, Florida State University
- Abstract/Description
-
The main result in this thesis bounds the combinatorial modulus of a ring in a triangulation graph in terms of the modulus of a related ring. The bounds depend only on how the rings are related and not on the rings themselves. This may be used to solve the combinatorial type problem in a variety of situations, most significantly in graphs with unbounded degree. Other results regarding the type problem are presented along with several examples illustrating the limits of the results.
- Date Issued
- 2006
- Identifier
- FSU_migr_etd-0794
- Format
- Thesis
- Title
- A Comparison Study of Principal Component Analysis and Nonlinear Principal Component Analysis.
- Creator
-
Wu, Rui, Magnan, Jerry F., Bellenot, Steven, Sussman, Mark, Department of Mathematics, Florida State University
- Abstract/Description
-
In the field of data analysis, it is important to reduce the dimensionality of data, because it will help to understand the data, extract new knowledge from the data, and decrease the computational cost. Principal Component Analysis (PCA) [1, 7, 19] has been applied in various areas as a method of dimensionality reduction. Nonlinear Principal Component Analysis (NLPCA) [1, 7, 19] was originally introduced as a nonlinear generalization of PCA. Both of the methods were tested on various artificial and natural datasets sampled from: "F(x) = sin(x) + x", the Lorenz Attractor, and sunspot data. The results from the experiments have been analyzed and compared. Generally speaking, NLPCA can explain more variance than a neural network PCA (NN PCA) in lower dimensions. However, as a result of increasing the dimension, the NLPCA approximation will eventually lose its advantage. Finally, we introduce a new combination of NN PCA and NLPCA, and analyze and compare its performance. (A brief illustrative sketch follows this record.)
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-0704
- Format
- Thesis
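The comparison above uses PCA as the linear baseline. The sketch below implements classical PCA via the SVD and reports explained variance on data sampled near the curve F(x) = sin(x) + x mentioned in the abstract; the neural-network NLPCA side of the comparison is not reproduced here, and the noise level is an assumption.

```python
import numpy as np

def pca(X, n_components):
    """Classical PCA via the SVD of the centered data matrix.

    Returns the projected scores and the fraction of variance explained
    by each retained component.
    """
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)
    scores = Xc @ Vt[:n_components].T
    return scores, explained[:n_components]

# Data sampled near y = sin(x) + x with a little noise.
rng = np.random.default_rng(4)
x = np.linspace(-3, 3, 500)
data = np.column_stack([x, np.sin(x) + x]) + 0.1 * rng.standard_normal((500, 2))
scores, ratio = pca(data, n_components=1)
print(scores.shape, round(float(ratio[0]), 3))   # one linear component captures most of the variance
```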
- Title
- Computational Aeroacoustics Cascade Model of Fan Noise.
- Creator
-
Lepoudre, Philip P., Tam, Christopher, Shih, Chiang, Gallivan, Kyle, Hussaini, Yousuff, Wang, Xiaoming, Department of Mathematics, Florida State University
- Abstract/Description
-
A Computational Aeroacoustics [CAA] cascade model has been built to study the generation and propagation mechanisms of noise resulting from the interaction of the fan and outlet guide vanes in a high-bypass ratio turbofan engine. Also called rotor-stator interaction noise, this noise source is a dominant contributor to the total tone and broadband noise levels produced by the engine, and therefore an improved understanding of the noise generation processes will assist in developing successful noise reduction strategies. The CAA cascade model directly solves the non-linear compressible Navier-Stokes equations on a two-dimensional linear cascade representation of the fan blade rows. The model incorporates real blade geometry and the rotor and stator blade rows are joined together with a sliding interface method. The fully-coupled aerodynamic flow and acoustic field are directly captured in one high resolution simulation, and therefore the noise production and propagation mechanisms can be visualized and measured in detail. The model includes the fully-coupled physics of the non-linear sound generation and propagation in swirling wake flow, as well as the transmission and reflection of sound through the blade rows. Previous models of rotor-stator interaction noise have typically involved some level of decoupling between the blade rows in order to simplify the noise problem. State-of-the-art CAA methodology is used to produce a high quality numerical solution with minimal dissipation and dispersion of supported waves. The multi-size-mesh multi-time-step Dispersion Relation Preserving [DRP] scheme is used for efficient computation of the wide range of length and time scales in the problem. A conformal mapping technique is used to generate body-fitted grids around the blade shapes, which are overset on a background grid to create the blade rows. An optimized interpolation scheme is employed for data transfer between the overset grids and also to create the sliding interface between the moving rotor-fixed grid and stationary stator-fixed grid. A completely new computer program was built for efficient implementation of the cascade model on parallel computers using Message Passing Interface [MPI], and the code was shown to have good parallel performance. The program is a general purpose solver for CAA calculations involving complex flow and geometry, and is a valuable resource for future research. A representative rotor-stator cascade with three rotor blades and five stator blades was constructed using real fan and outlet guide vane cross-sectional shapes from the NASA Glenn 22-in. model fan. A fully developed flow was obtained through the blade rows at the approach condition of the model fan. The performance of the sliding interface method was analyzed by comparing the solution on the rotor-fixed and stator-fixed grids at the coincident sliding interface mesh line, and the error in grid transfer interpolation was found to be comparable to the low error levels of the underlying DRP scheme. The simulation was used to produce animations of pressure and Mach contour, which provided a wealth of visual information about the flow field and noise generation and propagation behaviour in the cascade. The ability of the CAA cascade model to produce a high fidelity picture of the interaction noise has been demonstrated.
In addition, the velocity and pressure fields were measured at various axial locations in the domain to quantify the mean and fluctuating components of the swirling wake flow between the blade rows and after the stator. The tone noise results were compared with interaction tone linear theory. The theory predicted the existence of a small number of propagating spinning wave modes at harmonics of rotor blade passing frequency [BPF]. In particular, the dominant interaction tone at BPF, labelled , was predicted to have two wave fronts in the circumferential domain period and to spin counter to the direction of the rotor. This interaction tone was clearly visible in animations of the pressure contour as an intense shock wave moving at an oblique spiral angle between the blade rows and after the stator. The wave shape was measured using a moving average, and the high amplitude waveform showed characteristic non-linear steepening, which calls into question the common assumption that the interaction tones can be adequately represented by single linear wave modes. The spinning modes in the solution were measured at various axial locations using a joint temporal-spatial modal decomposition of the fluctuating pressure field, and very good agreement was observed with the modal content predicted by linear theory. The relationship of the mode spiral angle to blade stagger angle and the phase velocity of the spinning modes were shown to govern the transmission and reflection behaviour of the modes through the blade rows. The mode was reflected and frequency shifted by the rotor, and the reflected mode propagated through the stator blade row to the outlet. Only co-rotating modes were able to propagate through the rotor to the inlet, and hence the sound levels in the inlet were significantly lower than in the outlet. This behaviour is in good agreement with the trends observed in experimental studies of fan noise. The unsteady flow and surface pressure fluctuations around a stator blade were also measured. Spectral analysis of the surface pressure fluctuations revealed the highest sound pressure levels occurred near the blade leading edge and on the upper blade surface near the trailing edge. The sound source mechanisms on the stator blade are related to the fluctuating loading on the blade as it cuts through the rotor wake profile and experiences significant variation in the local angle of attack. (A brief illustrative sketch follows this record.)
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-3115
- Format
- Thesis
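One measurement described above is a joint temporal-spatial (spinning-mode) decomposition of the pressure field. The sketch below performs a generic two-dimensional FFT decomposition on a synthetic single spinning mode to show how a circumferential mode number and frequency pair is recovered; it has no connection to the dissertation's cascade solver, and the mode number and frequency used are made up.

```python
import numpy as np

# Decompose a pressure signal sampled on a circumferential line over time into
# spinning modes with a 2-D FFT.  The signal below is synthetic.
n_theta, n_t = 64, 256
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
t = np.linspace(0.0, 1.0, n_t, endpoint=False)
m_true, freq_true = 2, 12.0                            # two lobes spinning at 12 Hz
TH, T = np.meshgrid(theta, t, indexing="ij")
p = np.cos(m_true * TH - 2.0 * np.pi * freq_true * T)  # a single spinning mode

P = np.fft.fft2(p) / p.size                            # amplitudes over (mode, frequency)
m_idx, f_idx = np.unravel_index(np.argmax(np.abs(P)), P.shape)
m = np.fft.fftfreq(n_theta, d=1.0 / n_theta)[m_idx]    # circumferential mode number
f = np.fft.fftfreq(n_t, d=t[1] - t[0])[f_idx]          # temporal frequency in Hz
print(int(m), float(f))                                # recovers the (2, -12) / (-2, 12) conjugate pair
```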
- Title
- A Computational Study of Ion Conductance in the KcsA K⁺ Channel Using a Nernst-Planck Model with Explicit Resident Ions.
- Creator
-
Jung, Yong Woon, Mascagni, Michael A., Huffer, Fred, Bowers, Philip, Klassen, Eric, Cogan, Nick, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation, we describe the biophysical mechanisms underlying the relationship between the structure and function of the KcsA K+ channel. Because of the conciseness of electro-diffusion theory and the computational advantages of a continuum approach, Nernst-Planck (NP) type models such as the Goldman-Hodgkin-Katz (GHK) and Poisson-Nernst-Planck (PNP) models have been used to describe currents in ion channels. However, the standard PNP (SPNP) model is known to be inapplicable to narrow ion channels because it cannot handle discrete ion properties. To overcome this weakness, we formulated the explicit resident ions Nernst-Planck (ERINP) model, which applies a local explicit model where the continuum model fails. The effects of the ERI Coulomb potential, the ERI induced potential, and the ERI dielectric constant on ion conductance were then tested in the ERINP model. Using the current-voltage (I-V) and current-concentration (I-C) relationships determined from the ERINP model, we discovered biologically significant information that is unobtainable from the traditional continuum model. The mathematical analysis of the K+ ion dynamics revealed a tight structure-function system with a shallow well, a deep well, and two K+ ions resident in the selectivity filter. We also demonstrated that the ERINP model not only reproduced the experimental results with a realistic set of parameters but also reduced CPU costs. (A brief illustrative sketch follows this record.)
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-3741
- Format
- Thesis
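The abstract contrasts the ERINP model with classical Nernst-Planck descriptions such as the Goldman-Hodgkin-Katz relation. The function below evaluates the standard GHK current-density equation for a single ionic species; it is the textbook relation only, not the ERINP model, and the permeability and concentrations in the example are illustrative values.

```python
import numpy as np

F = 96485.332           # Faraday constant, C/mol
R = 8.314462            # gas constant, J/(mol K)

def ghk_current(v, p_ion, z, c_in, c_out, temp=298.15):
    """Goldman-Hodgkin-Katz current density for one ionic species.

    v: membrane potential (V); p_ion: permeability (m/s); z: valence;
    c_in, c_out: concentrations (mol/m^3, numerically equal to mM).
    Returns current density in A/m^2.
    """
    v = np.asarray(v, dtype=float)
    xi = z * v * F / (R * temp)
    small = np.abs(xi) < 1e-6                         # use the xi -> 0 limit near v = 0
    xi_safe = np.where(small, 1.0, xi)
    full = p_ion * z * F * xi_safe * (c_in - c_out * np.exp(-xi_safe)) / (1.0 - np.exp(-xi_safe))
    limit = p_ion * z * F * (c_in - c_out)
    return np.where(small, limit, full)

# K+ current over a range of voltages (150 mM inside, 5 mM outside).
volts = np.linspace(-0.1, 0.1, 5)
print(np.round(ghk_current(volts, p_ion=1e-8, z=1, c_in=150.0, c_out=5.0), 4))
```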
- Title
- Conformal Tilings and Type.
- Creator
-
Mayhook, Dane, Bowers, Philip L., Riley, Mark A., Heil, Wolfgang H., Klassen, E. (Eric), Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This paper examines a class of geometric tilings known as conformal tilings, first introduced by Bowers and Stephenson in a 1997 paper, and later developed in a series of papers by the same authors. These tilings carry a prescribed conformal structure in that the tiles are all conformally regular, and admit a reflective structure. Conformal tilings are essentially uniquely determined by their combinatorial structure, which we encode as a planar polygonal complex. It is natural to consider not just a single planar polygonal complex, but its entire local isomorphism class. We present a case study on the local isomorphism class of the discrete hyperbolic plane complex, ultimately providing a constructive description of each of its uncountably many members. Conformal tilings may tile either the complex plane or the Poincaré disk, and answering the type problem motivates the remainder of the paper. Subdivision operators are used to repeatedly subdivide and amalgamate tilings, and Bowers and Stephenson prove that when a conformal tiling admits a combinatorial hierarchy manifested by an expansive, conformal subdivision operator, then that tiling is parabolic and tiles the plane. We introduce a new notion of hierarchy, a fractal hierarchy, and generalize their result in some cases by showing that conformal tilings which admit a combinatorial hierarchy manifested by an expansive, fractal subdivision operator are also parabolic and tile the plane, assuming that two generic conditions for conformal tilings are true. This then answers the problem for certain expansion complexes, showing that expansion complexes for appropriate rotationally symmetric subdivision operators are necessarily parabolic.
- Date Issued
- 2016
- Identifier
- FSU_2016SU_Mayhook_fsu_0071E_13406
- Format
- Thesis
- Title
- Constant Proportions Portfolio Strategies in an Evolutionary Context under a Dividend Factor Model.
- Creator
-
Mavroudis, Konstantinos, Nolder, Craig, Schlagenhauf, Don, Beaumont, Paul, Case, Bettye Anne, Kercheval, Alec, Sumners, De Witt, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation we explore the impact of various constant-proportions investment strategies in an economic evolutionary market. Dividends are generated according to a new Dividend Factor Model. Furthermore, dividends are estimated and calibrated from data using Principal Component Analysis and Factor Analysis. Moreover, we perform simulations to study the long-run outcome of an evolutionary competition with several well diversified constant-proportions strategies, among them some innovative strategies. We present and compare a variety of simulations with dividends being artificially generated according to the many different versions of our model. Our simulation results are important for both theoretical and practical reasons. In theoretical terms we have a model where, although the true rational strategy is the only probable dominant strategy, it is also possible for some "behavioral" rules to perform better under specific circumstances. In practical terms we suggest new constant-proportions strategies that could be superior for investors at least in the short run. (A brief illustrative sketch follows this record.)
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-2654
- Format
- Thesis
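Constant-proportions strategies are the object of the simulations described above. The sketch below computes the wealth path of a portfolio that is rebalanced to fixed weights every period on synthetic returns; it omits dividends, market clearing, and the evolutionary competition, and all numerical choices are assumptions.

```python
import numpy as np

def run_constant_proportions(returns, weights):
    """Wealth path of a strategy that rebalances to fixed proportions each period.

    returns: (T, n_assets) array of gross returns; weights: fixed portfolio
    proportions summing to one.
    """
    weights = np.asarray(weights, dtype=float)
    wealth = 1.0
    path = []
    for gross in returns:
        wealth *= float(weights @ gross)              # rebalance, then realize the period's returns
        path.append(wealth)
    return np.array(path)

# Compare two constant-proportions rules on synthetic gross returns.
rng = np.random.default_rng(5)
T = 250
risky = np.exp(0.0002 + 0.01 * rng.standard_normal(T))   # risky asset gross returns
bond = np.full(T, 1.0001)                                 # near-riskless asset
returns = np.column_stack([risky, bond])
print(round(run_constant_proportions(returns, [0.6, 0.4])[-1], 4))
print(round(run_constant_proportions(returns, [0.2, 0.8])[-1], 4))
```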
- Title
- Constructing Non-Trivial Elements of the Shafarevich-Tate Group of an Abelian Variety.
- Creator
-
Biswas, Saikat, Agashe, Amod, Aggarwal, Sudhir, Hironaka, Eriko, Van Hoeij, Mark, Aldrovandi, Ettore, Department of Mathematics, Florida State University
- Abstract/Description
-
The Shafarevich-Tate group of an elliptic curve is an important invariant of the curve whose conjectural finiteness can sometimes be used to determine the rank of the curve. The second part of the Birch and Swinnerton-Dyer (BSD) conjecture gives a conjectural formula for the order of the Shafarevich-Tate group of an elliptic curve in terms of other computable invariants of the curve. Cremona and Mazur initiated a theory that can often be used to verify the BSD conjecture by constructing non-trivial elements of the Shafarevich-Tate group of an elliptic curve by means of the Mordell-Weil group of an ambient curve. In this thesis, we extract a general theorem out of Cremona and Mazur's work and give precise conditions under which such a construction can be made. We then give an extension of our result which provides new theoretical evidence for the BSD conjecture. Finally, we prove a theorem that gives an alternative method to potentially construct non-trivial elements of the Shafarevich-Tate group of an elliptic curve by using the component groups of a second curve.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-3717
- Format
- Thesis
- Title
- Deterministic and Stochastic Aspects of Data Assimilation.
- Creator
-
Akella, Santharam, Navon, Ionel Michael, O'Brien, James J., Erlebacher, Gordon, Wang, Qi, Sussman, Mark, Department of Mathematics, Florida State University
- Abstract/Description
-
The principles of optimal control of distributed parameter systems are used to derive a powerful class of numerical methods for solutions of inverse problems, called data assimilation (DA) methods. Using these DA methods one can efficiently estimate the state of a system and its evolution. This information is crucial for achieving more accurate long term forecasts of complex systems, for instance, the atmosphere. DA methods achieve their goal of optimal estimation via combination of all available information in the form of measurements of the state of the system and a dynamical model which describes the evolution of the system. In this dissertation work, we study the impact of new nonlinear numerical models on DA. High resolution advection schemes have been developed and studied to model propagation of flows involving sharp fronts and shocks. The impact of high resolution advection schemes in the framework of inverse problem solution/DA has been studied only in the context of linear models. A detailed study of the impact of various slope limiters and the piecewise parabolic method (PPM) on DA is the subject of this work. In 1-D we use a nonlinear viscous Burgers equation, and in 2-D a global nonlinear shallow water model. The results obtained show that using the various advection schemes consistently improves variational data assimilation (VDA) in the strong constraint form, which does not include model error. The cost functional did, however, include an efficient and physically meaningful construction of the background cost functional term, J_b, using balance and diffusion equation based correlation operators. This was then followed by an in-depth study of various approaches to model the systematic component of model error in the framework of a weak constraint VDA. Three simple forms of model error evolution, decreasing, invariant, and exponentially increasing in time, were tested. The inclusion of model error provides a substantial reduction in forecasting errors; in particular, the exponentially increasing form in conjunction with the piecewise parabolic high resolution advection scheme was found to provide the best results. Results obtained in this work can be used to formulate sophisticated forms of model errors, and could lead to implementation of new VDA methods using numerical weather prediction models which involve high resolution advection schemes such as the van Leer slope limiters and the PPM. (A brief illustrative sketch follows this record.)
- Date Issued
- 2006
- Identifier
- FSU_migr_etd-0145
- Format
- Thesis
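The dissertation above works with strong- and weak-constraint 4-D Var for, among other models, a 1-D viscous Burgers equation. The sketch below sets up a strong-constraint 4-D Var cost function for that equation and minimizes it with a finite-difference gradient instead of an adjoint model; the discretization, observation times, and optimizer are assumptions chosen only to show the structure of the inverse problem.

```python
import numpy as np
from scipy.optimize import minimize

# Strong-constraint 4D-Var sketch: the control variable is the initial
# condition of a 1-D viscous Burgers model, and the cost measures the misfit
# to observations at a few times.
n, nu, dt, nsteps = 40, 0.05, 0.02, 25
dx = 2.0 * np.pi / n
x = np.arange(n) * dx

def step(u):
    """One explicit step of u_t + u u_x = nu u_xx on a periodic grid."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * (-u * ux + nu * uxx)

def forward(u0):
    states = [u0]
    for _ in range(nsteps):
        states.append(step(states[-1]))
    return states

truth = forward(np.sin(x))                       # synthetic "true" trajectory
obs_times = [5, 15, 25]
obs = {k: truth[k] for k in obs_times}           # perfect observations, for simplicity

def cost(u0):
    states = forward(u0)
    return sum(np.sum((states[k] - obs[k]) ** 2) for k in obs_times)

result = minimize(cost, np.zeros(n), method="L-BFGS-B")   # gradient by finite differences
print(round(float(np.max(np.abs(result.x - np.sin(x)))), 3))   # error in the recovered initial state
```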
- Title
- Developing SRSF Shape Analysis Techniques for Applications in Neuroscience and Genomics.
- Creator
-
Wesolowski, Sergiusz, Wu, Wei, Bertram, R. (Richard), Srivastava, Anuj, Beerli, Peter, Mio, Washington, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This dissertation focuses on exploring the capabilities of the SRSF statistical shape analysis framework through various applications. Each application gives rise to a specific mathematical shape analysis model. The theoretical investigation of the models, driven by real data problems, gives rise to new tools and theorems necessary to conduct sound inference in the space of shapes. From a theoretical standpoint, robustness results are provided for the estimation of model parameters, and an ANOVA-like statistical testing procedure is discussed. The projects were a result of the collaboration between theoretical and application-focused research groups: the Shape Analysis Group at the Department of Statistics at Florida State University, the Center of Genomics and Personalized Medicine at FSU, and FSU's Department of Neuroscience. As a consequence, each of the projects consists of two aspects: the theoretical investigation of the mathematical model and an application driven by a real-life problem. The application components are similar from the data-modeling standpoint. In each case the problem is set in an infinite dimensional space, elements of which are experimental data points that can be viewed as shapes. The three projects are: "A new framework for Euclidean summary statistics in the neural spike train space". The project provides a statistical framework for analyzing spike train data and a new noise removal procedure for neural spike trains. The framework adapts the SRSF elastic metric in the space of point patterns to provide a new notion of distance. "SRSF shape analysis for sequencing data reveal new differentiating patterns". This project uses the shape interpretation of Next Generation Sequencing data to provide a new point of view on exon-level gene activity. The novel approach reveals new differential gene behavior that cannot be captured by state-of-the-art techniques. Code is available online in a GitHub repository. "How changes in shape of nucleosomal DNA near TSS influence changes of gene expression". The result of this work is a novel shape analysis model explaining the relation between changes in the DNA arrangement on nucleosomes and changes in differential gene expression. (A brief illustrative sketch follows this record.)
- Date Issued
- 2017
- Identifier
- FSU_FALL2017_Wesolowski_fsu_0071E_14177
- Format
- Thesis
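The SRSF transform is the common thread of the three projects above. The function below computes the square-root slope function of a sampled curve and evaluates an unaligned L2 distance between a bump and a time-warped copy; the elastic framework would additionally optimize over warping functions, and none of the dissertation's spike-train or genomics models is reproduced here.

```python
import numpy as np

def srsf(f, t):
    """Square-root slope function (SRSF) of a sampled curve f(t).

    q(t) = sign(f'(t)) * sqrt(|f'(t)|); comparing curves through q with the
    L2 metric underlies the elastic shape analysis framework.
    """
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

# A bump function and a time-warped copy of it.
t = np.linspace(0.0, 1.0, 200)
f1 = np.exp(-((t - 0.5) ** 2) / 0.01)
f2 = np.exp(-((t ** 1.3 - 0.5) ** 2) / 0.01)          # warped version of the same bump
q1, q2 = srsf(f1, t), srsf(f2, t)
step_size = t[1] - t[0]
unaligned_l2 = np.sqrt(np.sum((q1 - q2) ** 2) * step_size)
print(round(float(unaligned_l2), 3))                   # distance before any elastic alignment
```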
- Title
- Diffuse Interface Method for Two-Phase Incompressible Flows.
- Creator
-
Han, Daozhi, Wang, Xiaoming, Höflich, Peter, Gallivan, Kyle A., Kopriva, David A., Oberlin, Daniel M., Sussman, Mark, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
In this contribution, we focus on the study of multiphase flow using the phase field approach. Multiphase flow phenomena are ubiquitous. Common examples include coupled atmosphere and ocean system (air and water), oil reservoir (water, oil and gas), cloud and fog (water vapor, water and air). Multiphase flows also play an important role in many engineering and environmental science applications. For two fluids with matched density, the Cahn-Hilliard-Navier-Stokes system (CHNS) is a well...
Show moreIn this contribution, we focus on the study of multiphase flow using the phase field approach. Multiphase flow phenomena are ubiquitous. Common examples include coupled atmosphere and ocean system (air and water), oil reservoir (water, oil and gas), cloud and fog (water vapor, water and air). Multiphase flows also play an important role in many engineering and environmental science applications. For two fluids with matched density, the Cahn-Hilliard-Navier-Stokes system (CHNS) is a well accepted phase field model. We propose a novel second order in time numerical scheme for solving the CHNS system. The scheme is based on a second order convex-splitting for the Cahn-Hilliard equation and pressure-projection for the Navier-Stokes equation. We show that the scheme is mass-conservative, satisfies a modified energy law and is therefore unconditionally stable. Moreover, we prove that the scheme is unconditionally uniquely solvable at each time step by exploring the monotonicity associated with the scheme. Thanks to the simple coupling of the scheme, we design an efficient Picard iteration procedure to further decouple the computation of Cahn-Hilliard equation and Navier-Stokes equation. We implement the scheme by the mixed finite element method. Ample numerical experiments are performed to validate the accuracy and efficiency of the numerical scheme. In addition, we propose a novel decoupled unconditionally stable numerical scheme for the simulation of two-phase flow in a Hele-Shaw cell which is governed by the Cahn-Hilliard-Hele-Shaw system (CHHS). The temporal discretization of the Cahn-Hilliard equation is based on a convex-splitting of the associated energy functional. Moreover, the capillary forcing term in the Darcy equation is separated from the pressure gradient at the time discrete level by using an operator-splitting strategy. Thus the computation of the nonlinear Cahn-Hilliard equation is completely decoupled from the update of pressure. Finally, a pressure-stabilization technique is used in the update of pressure so that at each time step one only needs to solve a Poisson equation with constant coefficient. We show that the scheme is unconditionally stable. Numerical results are presented to demonstrate the accuracy and efficiency of our scheme. The CHNS system and CHHS system are two widely used phase field models for two-phase flow in a single domain (either conduit or Hele-Shaw cell/porous media). There are applications such as flows in unconfined karst aquifers, karst oil reservoir, proton membrane exchange fuel cell, where multiphase flows in conduits and in porous media must be considered together. Geometric configurations that contain both conduit (or vug) and porous media are termed karstic geometry. We present a family of phase field (diffusive interface) models for two phase flow in karstic geometry. These models, the so-called Cahn-Hilliard-Stokes-Darcy system, together with the associated interface boundary conditions are derived by utilizing Onsager's extremum principle. The models derived enjoy physically important energy laws and are consistent with thermodynamics. For the analysis of the Cahn-Hilliard-Stokes-Darcy system, we show that there exists at least a global in time finite energy solution by the compactness argument. A weak-strong uniqueness result is also established, which says that the strong solution, if exists, is unique in the class of weak solutions. 
Finally, we propose and analyze two unconditionally stable numerical algorithms of first order and second order respectively, for solving the CHSD system. A decoupled numerical procedure for practical implementation of the schemes are also presented. The decoupling is realized through explicit discretization of the velocity in the Cahn-Hilliard equation and extrapolation in time of the interface boundary conditions. At each time step, one only needs to solve a Cahn-Hilliard type equation in the whole domain, a Darcy equation in porous medium, and a Stokes equation in conduit in a separate and sequential fashion. Two numerical experiments, boundary driven and buoyancy driven flows, are performed to illustrate the effectiveness of our scheme. Both numerical simulations are of physical interest for transport processes of two-phase flow in karst geometry.
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9609
- Format
- Thesis
- Title
- Dirac Operators, Multipliers and H^p Spaces of Monogenic Functions.
- Creator
-
Wang, Guanghou, Nolder, Craig, Hawkes, Lois, Case, Bettye, Hironaka, Eriko, Quine, Jack, Seppälä, Mika, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation establishes several results in the Clifford algebra setting. Firstly, a Caccioppoli-type estimate is derived for solutions of $A$-Dirac equations of the form $DA(x,Du) = 0$, where $D$ is the Dirac operator. These $A$-Dirac equations generalize elliptic equations of $A$-harmonic type, i.e. $\mathrm{div}\,A(x,\nabla u)=0$. Secondly, the multiplier theory from Fourier analysis is generalized to Clifford analysis. Once the multipliers of the operators $\mathcal{D}$, $T$ and $\Pi$ are identified, several related properties follow easily, including two integral representation theorems; iterations of the operators $\mathcal{D}$ and $\Delta$ are also discussed. Thirdly, a Carleson measure theorem is obtained for monogenic Hardy spaces on the unit ball in $\mathbb{R}^{n+1}$, as well as a Clifford Riesz representation theorem. Furthermore, a boundedness theorem for certain inhomogeneous Dirac equations is established with the help of the theory of spherical monogenic functions.
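For orientation, the Dirac operator $D$ appearing in the $A$-Dirac equation above is, in the usual Clifford-analysis convention with generators satisfying $e_ie_j+e_je_i=-2\delta_{ij}$,

\[
D=\sum_{i=1}^{n}e_i\,\partial_{x_i},\qquad D^{2}=-\Delta,
\]

so $D$ factors the Laplacian; this is the standard sense in which iterations of Dirac-type operators and $\Delta$ are related. Sign and normalization conventions differ between authors, so the dissertation's own conventions may differ from the ones shown here.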
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5259
- Format
- Thesis
- Title
- Dirichlet's Theorem and Analytic Number Theory.
- Creator
-
Frey, Thomas W., Department of Mathematics
- Abstract/Description
-
In 1837 Dirichlet proved that there are infinitely many primes in every arithmetic progression whose first term and common difference are coprime. This was done by studying Dirichlet L-functions, Dirichlet series, Dirichlet characters (modulo k), and Euler products. In this thesis, the necessary facts, theorems, and properties are developed and then assembled into a complete proof of Dirichlet's Theorem.
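For context, the central objects in such a proof are the Dirichlet L-functions attached to characters $\chi$ modulo $k$, whose Euler products encode the primes:

\[
L(s,\chi)=\sum_{n=1}^{\infty}\frac{\chi(n)}{n^{s}}=\prod_{p\ \text{prime}}\bigl(1-\chi(p)\,p^{-s}\bigr)^{-1},\qquad \operatorname{Re}(s)>1.
\]

The crux of Dirichlet's argument is showing that $L(1,\chi)\neq 0$ for every non-principal character $\chi$ modulo $k$, from which the infinitude of primes in each residue class coprime to $k$ follows.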
- Date Issued
- 2015
- Identifier
- FSU_migr_uhm-0560
- Format
- Thesis
- Title
- Discontinuous Galerkin Spectral Element Approximations for the Reflection and Transmission of Waves from Moving Material Interfaces.
- Creator
-
Winters, Andrew R., Kopriva, David, Piekarewicz, Jorge, Hussaini, M. Yousuff, Gallivan, Kyle, Cogan, Nick, Case, Bettye Anne, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation develops and evaluates a computationally efficient and high-order numerical method to compute wave reflection and transmission from moving material boundaries. We use a discontinuous Galerkin spectral element method (DGSEM) approximation with an arbitrary Lagrangian-Eulerian mapping and derive the exact upwind numerical fluxes to model the physics of wave reflection and transmission at jumps in material properties. Spectral accuracy is obtained by placing moving material interfaces at element boundaries and solving the appropriate Riemann problem. We also derive and evaluate an explicit local time stepping (LTS) integration for the DGSEM on moving meshes. The LTS procedure is derived from Adams-Bashforth multirate time integration methods. We present speedup and memory estimates, which show that the explicit LTS integration scales well with problem size. The LTS time integrator is also highly parallelizable. The manuscript also gathers, derives and analyzes several analytical solutions for the problem of wave reflection and transmission from a plane moving material interface. We present time-step refinement studies and numerical examples to show that the approximations for wave reflection and transmission at dielectric and acoustic interfaces are spectrally accurate in space and have design temporal accuracy. The numerical tests also validate theoretical estimates that the LTS procedure can reduce computational cost by as much as an order of magnitude for time-accurate problems. Finally, we investigate the parallel speedup of the LTS integrator and compare it to a standard, low-storage Runge-Kutta method.
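As a reminder of the single-rate building block behind the multirate LTS integration described above, the third-order Adams-Bashforth step for $u_t=f(u,t)$ reads

\[
u^{n+1}=u^{n}+\Delta t\left(\tfrac{23}{12}\,f^{n}-\tfrac{16}{12}\,f^{n-1}+\tfrac{5}{12}\,f^{n-2}\right);
\]

the multirate variant advances different elements with different, locally chosen time steps. The specific order and coupling used in the dissertation are as stated in the abstract, not implied by this generic formula.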
- Date Issued
- 2014
- Identifier
- FSU_migr_etd-8916
- Format
- Thesis
- Title
- Discontinuous Galerkin Spectral Element Approximations on Moving Meshes for Wave Scattering from Reflective Moving Boundaries.
- Creator
-
Acosta-Minoli, Cesar Augusto, Kopriva, David, Srivastava, Anuj, Hussaini, M. Yousuff, Sussman, Mark, Ewald, Brian, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation develops and evaluates a high-order method to compute wave scattering from moving boundaries. Specifically, we derive and evaluate a discontinuous Galerkin spectral element method (DGSEM) with an Arbitrary Lagrangian-Eulerian (ALE) mapping to compute conservation laws on moving meshes, together with numerical boundary conditions for Maxwell's equations, the linear Euler equations and the nonlinear Euler gas-dynamics equations to calculate the numerical flux on reflective moving boundaries. We use one of a family of explicit time integrators such as Adams-Bashforth or low-storage explicit Runge-Kutta. The approximations preserve the discrete metric identities and the Discrete Geometric Conservation Law (DGCL) by construction. We present time-step refinement studies with moving meshes to validate the moving mesh approximations. The test problems include propagation of an electromagnetic Gaussian plane wave, a cylindrical pressure wave propagating in a subsonic flow, and a vortex convecting in a uniform inviscid subsonic flow. Each problem is computed on a time-deforming mesh with three methods used to calculate the mesh velocities: from exact differentiation, from the integration of an acceleration equation, and from numerical differentiation of the mesh position. In addition, we present four numerical examples using Maxwell's equations, one using the linear Euler equations and one more using the nonlinear Euler equations to validate these approximations. These are: reflection of light from a constantly moving mirror, reflection of light from a constantly moving cylinder, reflection of light from a vibrating mirror, reflection of sound in linear acoustics, and dipole sound generation by an oscillating cylinder in an inviscid flow.
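As one concrete instance of the third option mentioned above (computing mesh velocities by numerical differentiation of the mesh position), a second-order backward difference could be used,

\[
\dot{x}^{\,n}\approx\frac{3x^{n}-4x^{n-1}+x^{n-2}}{2\,\Delta t},
\]

where $x^{n}$ denotes a mesh-point position at time level $n$. The dissertation compares this kind of approximation against mesh velocities obtained from exact differentiation and from integrating an acceleration equation; the particular stencil shown here is illustrative only.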
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-0111
- Format
- Thesis
- Title
- Discrete Frenet Frame with Application to Structural Biology and Kinematics.
- Creator
-
Lu, Yuanting, Quine, John R., Huffer, Fred W., Bertram, Richard, Cross, Timothy A., Cogan, Nick, Department of Mathematics, Florida State University
- Abstract/Description
-
The classical Frenet frame is a moving frame on a smooth curve. Connecting a sequence of points in space by line segments makes a discrete curve. The reference frame consisting of the tangent, normal and binormal vectors at each point is defined as the discrete Frenet frame (DFF). The DFF is useful in studying the shapes of long molecules such as proteins. In this dissertation, we provide a solid mathematical foundation for the DFF by showing that the limit of the Frenet formula for the DFF is the classical Frenet formula. As part of a survey of various ways to compute rigid body motion, we show that the Denavit-Hartenberg (D-H) conventions in robotics are a special case of the DFF. Finally, we apply the DFF to solve the kink angle problem in protein alpha-helical structure using data from NMR experiments.
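One common way to realize a discrete Frenet frame on a discrete curve $p_0,p_1,\dots,p_N$ (the precise convention adopted in the dissertation may differ) is

\[
\mathbf{t}_i=\frac{p_{i+1}-p_i}{\lVert p_{i+1}-p_i\rVert},\qquad
\mathbf{b}_i=\frac{\mathbf{t}_{i-1}\times\mathbf{t}_i}{\lVert\mathbf{t}_{i-1}\times\mathbf{t}_i\rVert},\qquad
\mathbf{n}_i=\mathbf{b}_i\times\mathbf{t}_i,
\]

with discrete curvature and torsion read off from the turning angles between consecutive tangents and between consecutive binormals; the classical Frenet formulas are recovered in the limit as the segment lengths go to zero.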
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7477
- Format
- Thesis
- Title
- DNA Knotting: Occurrences, Consequences & Resolution.
- Creator
-
Mann, Jennifer Katherine, Sumners, De Witt L., Zechiedrich, E. Lynn, Greenbaum, Nancy L., Heil, Wolfgang, Quine, Jack, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation applies knot theory, DNA topology, linear algebra, statistics, probability theory and statistical mechanics to address questions about knotted, double-stranded DNA. The three main investigations are the cellular effects of knotting, the biophysics of knotting/unknotting, and the unknotting mechanism of human topoisomerase IIα. The work on the cellular effects of knotting was done in collaboration with Rick Deibler. The statistical mechanics work was done in collaboration with Zhirong Liu and Hue Sun Chan. Cellular DNA knotting is driven by DNA compaction, topoisomerization, replication, supercoiling-promoted strand collision, and DNA self-interactions resulting from transposition, site-specific recombination, and transcription (Spengler, Stasiak, and Cozzarelli 1985; Heichman, Moskowitz, and Johnson 1991; Wasserman and Cozzarelli 1991; Sogo, Stasiak, Martinez-Robles et al. 1999). Type II topoisomerases are ubiquitous, essential enzymes that interconvert DNA topoisomers to resolve knots. These enzymes pass one DNA helix through another by creating an enzyme-bridged transient break. Exactly how type II topoisomerases recognize their substrate and decide where to unknot DNA is unknown. What are the biological consequences of unresolved cellular DNA knotting? We investigated the physiological consequences of the well-accepted propensity of cellular DNA to collide and react with itself by analyzing the effects of plasmid recombination and knotting in E. coli using a site-specific recombination system. Fluctuation assays were performed to determine mutation rates of the strains used in these experiments (Rosche and Foster 2000). Our results show that DNA knotting: (i) promotes replicon loss by blocking DNA replication, (ii) blocks gene transcription, (iii) increases antibiotic sensitivity and (iv) promotes genetic rearrangements at a rate four orders of magnitude greater than that of an unknotted plasmid. If unresolved, DNA knots can be lethal and may help drive genetic evolution. The faster and more efficiently a type II topoisomerase unknots, the less chance there is for these disastrous consequences. How do type II topoisomerases unknot, rather than knot? If type II topoisomerases act randomly on juxtapositions of two DNA helices, knots are produced with a probability depending on the length of the circular DNA substrate. For example, random strand passage is equivalent to random cyclization of linear substrate, and random cyclization of 10.5 kb substrate produces about 3% DNA knots, mostly trefoils (Rybenkov, Cozzarelli, and Vologodskii 1993; Shaw and Wang 1993). However, experimental data show that type II topoisomerases suppress knotting to a level as much as 90-fold below that achieved by steady-state random DNA strand passage (Rybenkov, Ullsperger, and Vologodskii et al. 1997). Various models have been suggested to explain these results, and all of them assume that the enzyme directs the process. In contrast, our laboratory proposed (Buck and Zechiedrich 2004) that type II topoisomerases recognize the curvature of the two DNA helices within a juxtaposition and the resulting angle between the helices, and that the values of curvature and angle lie within bounds characteristic of DNA knots. Thus, our model uniquely proposes that unknotting is directed by the DNA and not the protein. We used statistical mechanics to test this hypothesis. Using a lattice polymer model, we generated conformations from pre-existing juxtaposition geometries and studied the resulting knot types.
First we determined the statistical relationship between the local geometry of a juxtaposition of two chain segments and whether the loop is knotted globally. We calculated the HOMFLY polynomial (Freyd, Yetter, Hoste et al. 1985) of each conformation to identify knot types. We found that hooked juxtapositions are far more likely to generate knots than free juxtapositions. Next we studied the transitions between initial and final knot/unknot states that resulted from a type II topoisomerase-like segment passage at the juxtaposition. Selective segment passages at free juxtapositions tended to increase knot probability. In contrast, segment passages at hooked juxtapositions caused more transitions from knotted to unknotted states than vice versa, resulting in a steady-state knot probability much less than that at topological equilibrium. In agreement with experimental type II topoisomerase results, the tendency of a segment passage at a given juxtaposition to unknot is strongly correlated with the tendency of that segment passage to decatenate. These quantitative findings show that there exists discriminatory topological information in local juxtaposition geometries that could be utilized by the enzyme to unknot rather than knot. This contrasts with the prior thought that the enzyme itself directs unknotting and strengthens the hypothesis proposed by our group that type II topoisomerases act on hooked rather than free juxtapositions. Will a type II topoisomerase resolve a DNA twist knot in one cycle of action? The group of knots known as twist knots is intriguing from both knot-theoretic and biochemical perspectives. A twist knot consists of an interwound region with any number of crossings and a clasp with two crossings. By reversing one of the crossings in the clasp, the twist knot is converted to the unknot. However, a crossing change in the interwound region produces a twist knot with two fewer nodes. Naturally occurring knots in cells are twist knots. The unknotting number, the minimal number of crossing reversals required to convert a knot to the unknot, is equal to one for any twist knot. Each crossing reversal performed by a type II topoisomerase requires energy. Within the cell, DNA knots might be pulled tight by forces such as those which accompany transcription, replication and segregation, thus increasing the likelihood of DNA damage. Therefore, it would be advantageous for type II topoisomerases to act on a crossing in the clasp region of a DNA twist knot, thus resolving the DNA knot in a single step. The mathematical unknotting number corresponds to the smallest number of topoisomerase strand passage events needed to untie a DNA knot. In order to study the unknotting of DNA knots by a type II topoisomerase, I used site-specific recombination systems and a bench-top fermentor to isolate large quantities of knotted DNA. My data show that purified five- and seven-noded twist knots are converted to the unknot by human topoisomerase IIα with no appearance of either trefoils or five-noded twist knots, which would be possible intermediates if the enzyme acted on one of the interwound nodes. Consequently, these data suggest that type II topoisomerases may preferentially act upon the clasp region of a twist knot. We have uniquely combined biology, chemistry, physics and mathematics to gain insight into the mechanism of type II topoisomerases, which are an important class of drug targets.
Our results suggest that DNA knotting alters DNA structure in a way that may drive type II topoisomerase resolution of DNA knots. Ultimately, the knowledge gained about type II topoisomerases and their unknotting mechanism may lead to the development of new drugs and treatments of human infectious diseases and cancer.
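For reference, the HOMFLY polynomial used above to identify knot types is determined (in one common normalization) by $P(\text{unknot})=1$ together with the skein relation

\[
a\,P(L_{+})-a^{-1}\,P(L_{-})=z\,P(L_{0}),
\]

where $L_{+}$, $L_{-}$ and $L_{0}$ are links that differ only at a single crossing. Variable conventions differ across the literature, including in the paper cited in the abstract, so this form is shown only as an illustration.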
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-2754
- Format
- Thesis
- Title
- Effective Methods in Intersection Theory and Combinatorial Algebraic Geometry.
- Creator
-
Harris, Corey S. (Corey Scott), Chicken, Eric, Aldrovandi, Ettore, Kim, Kyounghee, Petersen, Kathleen L., Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This dissertation presents studies of effective methods in two main areas of algebraic geometry: intersection theory and characteristic classes, and combinatorial algebraic geometry. We begin in chapter 2 by giving an effective algorithm for computing Segre classes of subschemes of arbitrary projective varieties. The algorithm presented here comes after several others which solve the problem in special cases, where the ambient variety is for instance projective space. To our knowledge, this is the first algorithm to be able to compute Segre classes in projective varieties with arbitrary singularities. In chapter 3, we generalize an algorithm by Goward for principalization of monomial ideals in nonsingular varieties to work on any scheme of finite type over a field, proving that the more general class of r.c. monomial subschemes in arbitrarily singular varieties can be principalized by a sequence of blow-ups at codimension 2 r.c. monomial centers. The main result of chapter 4 is a classification of the monomial Cremona transformations of the plane up to conjugation by certain linear transformations. In particular, an algorithm for enumerating all such maps is derived. In chapter 5, we study the multiview varieties and compute their Chern-Mather classes. As a corollary we derive a polynomial formula for their Euclidean distance degree, partially addressing a conjecture of Draisma et al. [35]. In chapter 6, we discuss the classical problem of counting planes tangent to general canonical sextic curves at three points. We investigate the situation for real and tropical sextics. In chapter 6, we explicitly compute equations of an Enriques surface via the involution on a K3 surface.
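A familiar example of the monomial plane Cremona transformations classified in chapter 4 is the standard quadratic involution

\[
\sigma\colon\ \mathbb{P}^{2}\dashrightarrow\mathbb{P}^{2},\qquad (x:y:z)\longmapsto(yz:xz:xy),
\]

which is its own inverse away from the coordinate triangle. This example is included here only for orientation; it is not singled out in the abstract.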
- Date Issued
- 2017
- Identifier
- FSU_2017SP_Harris_fsu_0071E_13829
- Format
- Thesis
- Title
- Efficient and Accurate Numerical Schemes for Long Time Statistical Properties of the Infinite Prandtl Number Model for Convection.
- Creator
-
Woodruff, Celestine, Wang, Xiaoming, Sang, Qing-Xiang Amy, Case, Bettye Anne, Ewald, Brian D., Gunzburger, Max D., Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
In our work we analyze and implement numerical schemes for the infinite Prandtl number model for convection. This model describes the convection that is a potential driving force behind the flow and temperature of the Earth's mantle. There are many schemes available, but most are given with no mention of their ability to adequately capture the long time statistical properties of the model. We investigate schemes with the potential to actually capture these statistics. We further show numerically that our schemes align with current knowledge of the model's characteristics at low Rayleigh numbers.
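In one common nondimensionalization, the infinite Prandtl number model referred to above couples a Stokes system for the velocity to an advection-diffusion equation for the temperature (exact scalings vary by author):

\[
\nabla p=\Delta\mathbf{u}+\mathrm{Ra}\,\theta\,\mathbf{e}_z,\qquad
\nabla\cdot\mathbf{u}=0,\qquad
\frac{\partial\theta}{\partial t}+\mathbf{u}\cdot\nabla\theta=\Delta\theta,
\]

where $\mathrm{Ra}$ is the Rayleigh number and $\mathbf{e}_z$ the unit vector opposite to gravity. Because the momentum balance contains no time derivative, the velocity is slaved instantaneously to the temperature field, a structural feature relevant to capturing long time statistics.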
- Date Issued
- 2015
- Identifier
- FSU_2015fall_Woodruff_fsu_0071E_12813
- Format
- Thesis
- Title
- An Electrophysiological and Mathematical Modeling Study of Developmental and Sex Effects on Neurons of the Zebra Finch Song System.
- Creator
-
Diaz, Diana Lissett Flores, Bertram, R. (Richard), Fadool, Debra Ann, Hyson, Richard L., Jain, Harsh Vardhan, Johnson, Frank (Professor of Psychology), Mio, Washington, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Learned motor patterns such as speaking, playing musical instruments and dancing require a defined sequence of movements. The mechanism of acquiring and perfecting these types of learned behaviors involves a highly complex neurological process not exclusive to humans. In fact, vocal learning in songbirds is a well-known model for studying the neural basis of motor learning, particularly human speech acquisition. In this dissertation, I explored differences in the intrinsic physiology of vocal cortex neurons, which underlie song acquisition and production in the zebra finch (Taeniopygia guttata), as a function of age, sex, and experience, using a combination of electrophysiology and mathematical modeling. Using three developmental time points in male zebra finches, Chapter 3 presents evidence of intrinsic plasticity in vocal cortex neurons during vocal learning. The experimental results in this chapter revealed age- and possibly learning-related changes in the physiology of these neurons, while the mathematical models suggested possible variations in both the expression and kinetics of several ion channels that cause the physiological changes. Exploiting the fact that male zebra finches exhibit auditory and vocal song learning, while females exhibit auditory song learning only, in Chapter 4 I compared the physiology of vocal cortex neurons between the sexes. This comparison reveals aspects of the neurons' physiology specialized for singing (males only) versus auditory learning of song (both males and females). Finally, in Chapter 4 I explored the effect of auditory learning on the physiology of vocal cortex neurons in females. Experimental results and mathematical models revealed regulation of ion channel expression due to auditory learning. In summary, this dissertation describes the effect of three variables (age, sex, and experience) now known to influence the physiology of key neurons in vocal learning.
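The abstract does not reproduce the model equations, but conductance-based (Hodgkin-Huxley-type) formulations are the standard tool for relating ion channel expression and kinetics to intrinsic physiology; purely as an illustration of that framework, and not of this dissertation's specific model, the membrane potential $V$ obeys

\[
C_m\,\frac{dV}{dt}=-\sum_{i}g_i\,m_i^{p_i}h_i^{q_i}\,(V-E_i)+I_{\mathrm{app}},
\]

where each ionic current has maximal conductance $g_i$, gating variables $m_i,h_i$ with integer exponents $p_i,q_i$, and reversal potential $E_i$, and $I_{\mathrm{app}}$ is an applied current. Changes in channel expression correspond to changes in the $g_i$, and changes in kinetics to changes in the gating dynamics.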
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Diaz_fsu_0071E_14037
- Format
- Thesis