Current Search: Numerical analysis (x)
Search results
Pages
 Title
 Quasirandom Optimization.
 Creator

Azoulay, Ariel, Peterson, Janet, Gunzburger, Max, Erlebacher, Gordon, Burkardt, John, Department of Scientific Computing, Florida State University
 Abstract/Description

In this work we apply quasirandom sequences to develop a derivative-free algorithm for approximating the global maximum of a given function. This work is based on previous results which used a single type of quasirandom sequence in a Brute Force approach and in an approach called Localization of Search. In this work we present several methods for computing quasirandom sequences as well as measures for determining their properties. We discuss the shortcomings of the Brute Force and Localization of Search methods and then present modifications which address these issues, culminating in a new algorithm which we call Modified Localization of Search. Our algorithm is applied to a test suite of problems and the results are discussed. Finally we present some comments on code development for our algorithm.
 Date Issued
 2011
 Identifier
 FSU_migr_etd0271
 Format
 Thesis
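The Brute Force approach named in the abstract above, evaluating the objective on a low-discrepancy point set and keeping the best point, can be sketched as follows. This is an illustrative sketch using a Halton sequence, not the thesis code; the objective function and point count are hypothetical.

```python
# Illustrative sketch (not the thesis code): brute-force global maximization
# over [0,1]^2 using a 2-D Halton quasirandom point set.

def halton(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def brute_force_max(f, n_points=2000):
    """Evaluate f on a quasirandom point set, return the best point and value."""
    best_x, best_val = None, float("-inf")
    for i in range(1, n_points + 1):
        p = (halton(i, 2), halton(i, 3))   # coprime bases -> low discrepancy
        val = f(p)
        if val > best_val:
            best_x, best_val = p, val
    return best_x, best_val

# Hypothetical objective: maximum of -(x-0.3)^2 - (y-0.7)^2 is at (0.3, 0.7).
x, v = brute_force_max(lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2)
```

Because the Halton points fill the square evenly, the best sampled point lands close to the true maximizer even without derivatives.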
 Title
 Riemannian Manifold Trust-Region Methods with Applications to Eigenproblems.
 Creator

Baker, Christopher Grover, Gallivan, Kyle, Absil, Pierre-Antoine, Krothapalli, Anjaneyulu, Erlebacher, Gordon, Srivastava, Anuj, Hussaini, Yousuff, Department of Scientific Computing, Florida State University
 Abstract/Description

This thesis presents and evaluates a generic algorithm for incrementally computing the dominant singular subspaces of a matrix. The relationship between the generality of the results and the necessary computation is explored, and it is shown that more efficient computation can be obtained by relaxing the algebraic constraints on the factorization. The performance of this method, both numerical and computational, is discussed in terms of the algorithmic parameters, such as block size and acceptance threshold. Bounds on the error are presented along with a posteriori approximations of these bounds. Finally, a group of methods is proposed which iteratively improve the accuracy of computed results and the quality of the bounds.
 Date Issued
 2008
 Identifier
 FSU_migr_etd0926
 Format
 Thesis
 Title
 Spherical Centroidal Voronoi Tessellations: Point Generation and Density Functions via Images.
 Creator

Womeldorff, Geoffrey A., Gunzburger, Max, Peterson, Janet, Erlebacher, Gordon, Department of Scientific Computing, Florida State University
 Abstract/Description

This thesis presents and investigates ideas for improving the creation of quality spherical centroidal Voronoi tessellations (SCVT). First, we discuss the theory of CVTs in general, and specifically on the sphere. Subsequently we consider the iterative processes, such as Lloyd's algorithm, which are used to construct them. Following this, we examine and introduce different schemes for creating their input values, known as generators, and compare the effects of these different initial points with respect to their ability to converge and the amount of work required to meet a given tolerance goal. In addition, we describe a method for deriving density functions from images so that we can shape generator density in an intuitive manner, and we implement this method with examples to demonstrate its efficacy.
 Date Issued
 2008
 Identifier
 FSU_migr_etd0843
 Format
 Thesis
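Lloyd's algorithm, mentioned in the abstract above, alternates between assigning sample points to their nearest generator and moving each generator to the centroid of its Voronoi region. A minimal planar (not spherical) sketch follows, approximating the regions with a fixed Monte Carlo sample; this is illustrative only, not the thesis implementation.

```python
# Illustrative sketch (planar analogue, not the thesis code): one Lloyd
# iteration bins sample points by nearest generator and returns the centroids.
import random

def lloyd_step(generators, samples):
    """Assign each sample to its nearest generator; return region centroids."""
    bins = [[] for _ in generators]
    for p in samples:
        i = min(range(len(generators)),
                key=lambda j: (p[0] - generators[j][0]) ** 2
                            + (p[1] - generators[j][1]) ** 2)
        bins[i].append(p)
    new_gens = []
    for g, pts in zip(generators, bins):
        if pts:
            new_gens.append((sum(q[0] for q in pts) / len(pts),
                             sum(q[1] for q in pts) / len(pts)))
        else:
            new_gens.append(g)  # empty region: keep the old generator
    return new_gens

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(5000)]
gens = [(random.random(), random.random()) for _ in range(4)]
for _ in range(30):   # iterate toward a centroidal configuration
    gens = lloyd_step(gens, samples)
```

On the sphere, the only essential changes are geodesic distances and projecting each centroid back onto the spherical surface.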
 Title
 Improvements in Metadynamics Simulations: The Essential Energy Space Random Walk and the Wang-Landau Recursion.
 Creator

Liu, Yusong, Yang, Wei, Erlebacher, Gordon, Peterson, Janet, Department of Scientific Computing, Florida State University
 Abstract/Description

Metadynamics is a popular tool to explore free energy landscapes, and it has been used to elucidate various chemical and biochemical processes. The height of the updating Gaussian function is very important for proper convergence to the target free energy surface: both higher and lower Gaussian heights have advantages and disadvantages, so a balance is required. This thesis presents the implementation of the Wang-Landau recursion scheme in metadynamics simulations to adjust the height of the unit Gaussian function. Compared with classical fixed Gaussian heights, this dynamically adjustable method was demonstrated to yield better-converged free energy surfaces more efficiently. In addition, through combination with the realization of an energy space random walk, the Wang-Landau recursion scheme can be readily used to deal with the pseudoergodicity problem in molecular dynamics simulations. The use of this scheme to efficiently and robustly obtain a biased free energy function is demonstrated within this thesis.
 Date Issued
 2008
 Identifier
 FSU_migr_etd1161
 Format
 Thesis
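The Wang-Landau recursion described above adjusts the hill height on the fly: Gaussian hills are deposited until the visit histogram over the collective variable is roughly flat, then the height is halved and the histogram reset. A toy 1-D sketch under assumed parameters (bin count, flatness tolerance, periodic coordinate), not the thesis implementation:

```python
# Illustrative sketch (not the thesis code): Wang-Landau-style halving of the
# Gaussian hill height in a 1-D metadynamics-like bias accumulation.
import math, random

def is_flat(hist, tol=0.8):
    """Histogram counts as 'flat' once every bin exceeds tol times the mean."""
    mean = sum(hist) / len(hist)
    return mean > 0 and min(hist) > tol * mean

random.seed(1)
nbins, width = 20, 1.0
bias = [0.0] * nbins      # accumulated bias potential, one value per bin
height = 1.0              # current Gaussian hill height
heights = [height]        # record of the Wang-Landau halvings
hist = [0] * nbins
x = nbins // 2
for step in range(200000):
    # Metropolis move on a flat underlying landscape plus the bias.
    xn = (x + random.choice((-1, 1))) % nbins
    if random.random() < math.exp(min(0.0, bias[x] - bias[xn])):
        x = xn
    hist[x] += 1
    # Deposit a Gaussian hill (periodic distance) at the current position.
    for b in range(nbins):
        d = min(abs(b - x), nbins - abs(b - x))
        bias[b] += height * math.exp(-d * d / (2.0 * width ** 2))
    if is_flat(hist):     # Wang-Landau recursion: halve the hill height
        height *= 0.5
        heights.append(height)
        hist = [0] * nbins
    if height < 1e-3:
        break
```

Early stages fill the landscape quickly with large hills; the recursion then refines the bias with progressively smaller ones, which is the convergence benefit the abstract describes.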
 Title
 A GIS-Based Model for Estimating Nitrate Fate and Transport from Septic Systems in Surficial Aquifers.
 Creator

Rios, J. Fernando, Ye, Ming, Peterson, Janet, Shanbhag, Sachin, Wilgenbusch, James, Department of Scientific Computing, Florida State University
 Abstract/Description

Estimating groundwater nitrate fate and transport is an important task in water resources and environmental management because excess nitrate loads may have negative impacts on human and environmental health. This work discusses the development of a simplified nitrate transport model and its implementation as a geographic information system (GIS)-based screening tool, whose purpose is to estimate nitrate loads to surface water bodies from on-site wastewater-treatment systems (OWTS). Key features of this project are the reduced data demands due to the use of a simplified model, as well as ease of use compared to traditional groundwater flow and transport models, achieved by embedding the model within a GIS. The simplified conceptual model consists of a simplified groundwater flow model in the surficial aquifer and a simplified transport model that makes use of an analytical solution to the advection-dispersion equation, used for determining nitrate fate and transport. Denitrification is modeled using first-order decay in the analytical solution, with the decay constant obtained from the literature and/or site-specific data. The groundwater flow model uses readily available topographic data to approximate the hydraulic gradient, which is then used to calculate seepage velocity magnitude and direction. The flow model is evaluated by comparing the results to a previous numerical modeling study of the U.S. Naval Air Station, Jacksonville (NAS) performed by the USGS. The results show that for areas in the vicinity of the NAS, the model is capable of predicting groundwater travel times from a source to a surface water body to within ±20 years of the USGS model, 75% of the time. The transport model uses an analytical solution based on the one by Domenico and Robbins (1985), the results of which are then further processed so that they may be applied to more general, real-world scenarios.
The solution, as well as the processing steps, are tested using artificially constructed scenarios, each meant to evaluate a certain aspect of the solution. For comparison purposes, each scenario is solved using a well-known numerical contaminant transport model. The results show that the analytical solution provides a reasonable approximation to the numerical result. However, it generally underestimates the concentration distribution to varying degrees depending on the choice of parameters, especially along the plume centerline. These results are in agreement with previous studies (Srinivasan et al., 2007; West et al., 2007). The adaptation of the analytical solution to more realistic scenarios results in an adequate approximation to the numerically calculated plume, except in areas near the advection front, where the model produces a plume whose shape differs noticeably from the numerical solution. Load calculations are carried out using a mass balance approach where the system is considered to be at steady state. The steady-state condition allows for a load estimate by subtracting the mass removal rate due to denitrification from the input mass rate. The input mass rate is calculated by taking into account advection and dispersion, while the mass removal rate due to denitrification is calculated from the definition of a first-order reaction. Comparison with the synthetic scenarios of the transport model shows that for the test cases, when decay rates are low, the model agrees well with the load calculation from the numerical model. As decay rates increase and the plume becomes shorter, the input load is overestimated by about 9% in the test cases and the mass removed due to denitrification is underestimated by 30% in the worst case. These results are likely due to the underestimation of concentration values by the analytical solution of the transport model.
 Date Issued
 2010
 Identifier
 FSU_migr_etd1851
 Format
 Thesis
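The steady-state mass balance described above, input mass rate minus first-order denitrification losses, reduces along a plume centerline to multiplying the input rate by exp(-k t), with travel time t = x / v. A minimal sketch with entirely hypothetical numbers, not the thesis model:

```python
# Illustrative sketch (not the thesis model): steady-state nitrate load
# surviving first-order denitrification along a 1-D advective flow path.
import math

def load_to_water_body(c0, q, k, x, v):
    """Mass flux reaching a water body at distance x.
    c0: source concentration, q: volumetric flow rate, k: first-order decay
    rate, v: seepage velocity.  Travel time is t = x / v, and the load is the
    input mass rate times the fraction surviving decay, exp(-k * t)."""
    input_rate = c0 * q
    surviving_fraction = math.exp(-k * x / v)
    return input_rate * surviving_fraction

# Hypothetical values: 40 mg/L source, 1 m^3/day flow, k = 0.008 1/day,
# 200 m to the water body, 0.5 m/day seepage velocity.
load = load_to_water_body(40.0, 1.0, 0.008, 200.0, 0.5)
```

With zero decay the full input load arrives; as k grows (or the plume lengthens), the delivered load drops exponentially, matching the sensitivity the abstract reports.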
 Title
 Flocking Implementation for the Blender Game Engine.
 Creator

Serrano, Myrna I. Merced, Erlebacher, Gordon, Ye, Ming, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

In this thesis, we discuss the development of a new Boids system that simulates flocking behavior inside the Blender Game Engine and within the framework of the Real-Time Particle Systems (RTPS) library developed by Ian Johnson. The collective behavior of Boids is characterized as an emergent behavior caused by following three steering behaviors: separation, alignment, and cohesion. The implementation leverages OpenCL to maintain the portability of Blender across different graphics cards and operating systems. Benchmarks of the RTPS-FLOCK system show that our implementation speeds up Blender's original Boids implementation (which only runs outside the game engine) by more than an order of magnitude. We demonstrate our Boids system in three ways. First, we illustrate how symmetry of the steering behavior is maintained in time. Second, we consider the behavior of a "swarm of bees" approaching their hive. And third, we simulate the motion of a "crowd" constrained to a two-dimensional plane.
 Date Issued
 2011
 Identifier
 FSU_migr_etd2481
 Format
 Thesis
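The three steering behaviors named in the abstract, separation, alignment, and cohesion, can be sketched per boid as follows. This is an illustrative CPU sketch, not the OpenCL RTPS-FLOCK implementation; the data layout and radius are assumptions.

```python
# Illustrative sketch (not the RTPS-FLOCK code): the three classic Boids
# steering contributions for one boid, computed from its neighbors.

def steer(boid, neighbors, sep_radius=1.0):
    """Return (separation, alignment, cohesion) steering vectors for `boid`.
    Each boid is (pos, vel) with pos and vel given as (x, y) tuples."""
    (px, py), (vx, vy) = boid
    n = len(neighbors)
    if n == 0:
        return (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)
    sep = [0.0, 0.0]
    avg_v = [0.0, 0.0]
    center = [0.0, 0.0]
    for (qx, qy), (wx, wy) in neighbors:
        dx, dy = px - qx, py - qy
        d2 = dx * dx + dy * dy
        if 0 < d2 < sep_radius ** 2:   # separation: push away from close boids
            sep[0] += dx / d2
            sep[1] += dy / d2
        avg_v[0] += wx; avg_v[1] += wy
        center[0] += qx; center[1] += qy
    alignment = (avg_v[0] / n - vx, avg_v[1] / n - vy)   # match mean velocity
    cohesion = (center[0] / n - px, center[1] / n - py)  # move toward centroid
    return tuple(sep), alignment, cohesion
```

In a GPU implementation each boid evaluates this independently, which is why the rules parallelize so well under OpenCL.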
 Title
 Adaptive Mesh Hydrodynamics of Non-Spherical Core-Collapse Supernovae.
 Creator

Guzman, James, Plewa, Tomasz, Hoeflich, Peter, Erlebacher, Gordon, Department of Scientific Computing, Florida State University
 Abstract/Description

We study the hydrodynamic evolution of a non-spherical core-collapse supernova in multidimensions. We begin our study from the moment of shock revival and continue for the first week after explosion, when expansion of the supernova ejecta becomes homologous. We observe growth and interaction of Richtmyer-Meshkov, Rayleigh-Taylor, and Kelvin-Helmholtz instabilities, resulting in extensive mixing of the heavy elements throughout the ejecta. We obtain a series of models at progressively higher resolution and provide a preliminary discussion of numerical convergence. Unlike in previous studies, our computations are performed in a single domain; periodic mesh mapping is avoided. This is made possible by employing an adaptive mesh refinement strategy in which the computational workload (defined as the product of the total number of computational cells and the length of the time step) is monitored and, if necessary, limited. Our results are in overall good agreement with the simulations reported by Kifonidis et al. We demonstrate, however, that the amount of mixing and the kinematic properties of radioactive species (i.e. 56Ni) are extremely anisotropic. In particular, we find that the model displays a strong tendency to expand laterally away from the equatorial plane toward the poles. Although this behavior is usually attributed to numerical artifacts characteristic of computations with assumed symmetry (the axis effect), the observed behavior can be attributed to the large heat content of the equatorial regions of the explosion model. Future studies are needed to verify that this explosion model property does not have a systematic character.
 Date Issued
 2009
 Identifier
 FSU_migr_etd3891
 Format
 Thesis
 Title
 Supervised Aggregation of Classifiers Using Artificial Prediction Markets.
 Creator

Lay, Nathan, Barbu, Adrian, Meyer-Baese, Anke, Plewa, Tomasz, Department of Scientific Computing, Florida State University
 Abstract/Description

Prediction markets have been demonstrated to be accurate predictors of the outcomes of future events. They have been successfully used to predict the outcomes of sporting events, political elections, and even business decisions. Their prediction accuracy has even outperformed that of other prediction methods such as polling. As an attempt to reproduce their predictive capability, a machine learning model of prediction markets is developed herein for classification. This model is a novel classifier aggregation technique that generalizes linear aggregation techniques. This prediction market aggregation technique is shown to outperform or match Random Forest on both artificial and real data sets. The notion of specialization is also developed and explored herein. This leads to a new kind of classifier referred to as a specialized classifier. These specialized classifiers are shown to improve the accuracy of prediction market aggregation, even to perfection.
 Date Issued
 2009
 Identifier
 FSU_migr_etd3219
 Format
 Thesis
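A toy version of budget-based prediction-market aggregation: each classifier's influence is its budget, the market price is the budget-weighted average probability, and settling pays participants in proportion to the mass they bet on the realized outcome. This is an illustrative sketch of the general idea, not the specific market model developed in the thesis; the payoff rule here is an assumption chosen so total budget is conserved.

```python
# Illustrative sketch (not the thesis model): budget-weighted aggregation of
# binary classifiers with market-style budget redistribution after an outcome.

def market_price(budgets, probs):
    """Budget-weighted average of the classifiers' positive-class probabilities."""
    total = sum(budgets)
    return sum(b * p for b, p in zip(budgets, probs)) / total

def settle(budgets, probs, outcome):
    """Redistribute budgets once the outcome (1 or 0) is revealed: each
    participant is paid in proportion to the mass it bet on that outcome,
    at the market price of that outcome.  Total budget is conserved."""
    price = market_price(budgets, probs)
    price_of_outcome = price if outcome == 1 else 1.0 - price
    new_budgets = []
    for b, p in zip(budgets, probs):
        bet_on_outcome = p if outcome == 1 else 1.0 - p
        new_budgets.append(b * bet_on_outcome / price_of_outcome)
    return new_budgets

# Two classifiers with equal budgets; the first is more confident and correct,
# so it gains budget (and hence future influence) from the second.
budgets = settle([1.0, 1.0], [0.9, 0.5], outcome=1)
```

Repeating this over training examples shifts budget toward accurate classifiers, which is the sense in which the market generalizes fixed linear weighting.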
 Title
 Effects of Vertical Mixing Closures on North Atlantic Overflow Simulations.
 Creator

Jacobsen, Douglas, Gunzburger, Max, Erlebacher, Gordon, Peterson, Janet, Department of Scientific Computing, Florida State University
 Abstract/Description

We are exploring the effect of using various vertical mixing closures on resolving the physical process known as overflow, in which cold dense water overflows from a basin in the ocean. This process is responsible for the majority of the ocean's dense water transport and also creates many of the dense water currents that are part of what is known as the Ocean Conveyor Belt. Two of the main places this happens are in the North Atlantic: the Denmark Strait and the Faroe Bank Channel. To simulate this process, two ocean models are used, the Parallel Ocean Program (POP) and the hybrid-coordinate Parallel Ocean Program (HyPOP). Using these models, differences are observed in three main vertical mixing schemes: Constant, Richardson Number, and KPP. Though not included in this thesis, the research also explores three different vertical gridding schemes: Z-Grid, Sigma Coordinate, and Isopycnal grids. The goal is to determine which combination gives the most acceptable results for resolving the overflow process. This is motivated by the large role this process plays in the ocean, as well as the difficulty in modeling it. If an ocean model cannot accurately simulate overflow, then a large portion of the ocean model will be incorrect and one cannot hope to get reasonable results from long simulations.
 Date Issued
 2009
 Identifier
 FSU_migr_etd3745
 Format
 Thesis
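Richardson-number closures of the kind compared above make vertical mixing strong where shear dominates stratification and weak otherwise. A sketch of a Pacanowski-Philander-type viscosity profile with assumed coefficient values; the actual POP/HyPOP parameterizations differ in detail.

```python
# Illustrative sketch (not the POP/HyPOP code): a Richardson-number-dependent
# vertical viscosity, nu(Ri) = nu0 / (1 + alpha*Ri)^n + nu_b.

def richardson_number(drho_dz, du_dz, rho0=1025.0, g=9.81):
    """Gradient Richardson number Ri = N^2 / (du/dz)^2, with buoyancy
    frequency N^2 = -(g / rho0) * drho/dz (negative drho/dz is stable)."""
    n2 = -(g / rho0) * drho_dz
    return n2 / max(du_dz ** 2, 1e-12)

def vertical_viscosity(ri, nu0=1e-2, nu_b=1e-4, alpha=5.0, n=2):
    """Strong mixing for small or negative Ri (shear-dominated), decaying to
    the background value nu_b for large Ri (stratification-dominated)."""
    ri = max(ri, 0.0)   # treat statically unstable columns as Ri = 0
    return nu0 / (1.0 + alpha * ri) ** n + nu_b

# Stable stratification with weak shear -> large Ri -> little vertical mixing.
ri = richardson_number(drho_dz=-0.01, du_dz=0.001)
nu = vertical_viscosity(ri)
```

In an overflow, the dense plume descending a slope generates strong shear, so the closure locally raises the mixing coefficient, and how aggressively it does so controls how much the overflow water is diluted.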
 Title
 Parallel Grid Generation and Multi-Resolution Methods for Climate Modeling Applications.
 Creator

Jacobsen, Douglas W. (Douglas William), Gunzburger, Max, Nof, Doron, Peterson, Janet, Erlebacher, Gordon, Navon, Michael, Burkardt, John, Ringler, Todd, Department of Scientific Computing, Florida State University
 Abstract/Description

Spherical centroidal Voronoi tessellations (SCVT) are used in many applications in a variety of fields, one being climate modeling. They are a natural choice for spatial discretizations on the surface of the Earth. New modeling techniques have recently been developed that allow the simulation of ocean and atmosphere dynamics on arbitrarily unstructured meshes, including SCVTs. Creating ultra-high-resolution SCVTs can be computationally expensive. A newly developed algorithm couples current algorithms for the generation of SCVTs with existing computational geometry techniques to provide the parallel computation of SCVTs and spherical Delaunay triangulations. Using this new algorithm, computing spherical Delaunay triangulations shows a speedup on the order of 4000 over other well-known algorithms when using 42 processors. As mentioned previously, newly developed numerical models allow the simulation of ocean and atmosphere systems on arbitrary Voronoi meshes, providing a multi-resolution modeling framework. A multi-resolution grid allows modelers to provide areas of interest with higher resolution in the hope of increasing accuracy. However, one method of providing higher resolution lowers the resolution in other areas of the mesh, which could potentially increase error. To determine the effect of multi-resolution meshes on numerical simulations in the shallow-water context, a standard set of shallow-water test cases is explored using the Model for Prediction Across Scales (MPAS), a new modeling framework jointly developed by the Los Alamos National Laboratory and the National Center for Atmospheric Research. An alternative approach to multi-resolution modeling is Adaptive Mesh Refinement (AMR). AMR typically uses information about the simulation to determine optimal locations for degrees of freedom; however, standard AMR techniques are not well suited for SCVT meshes.
In an effort to solve this issue, a framework is developed to allow AMR simulations on SCVT meshes within MPAS. The research contained in this dissertation ties together a newly developed parallel SCVT generator with a numerical method for use on arbitrary Voronoi meshes. Simulations are performed within the shallow-water context. New algorithms and frameworks are described and benchmarked.
 Date Issued
 2011
 Identifier
 FSU_migr_etd3743
 Format
 Thesis
 Title
 Stress-Driven Surface Instabilities in Epitaxial Thin Films.
 Creator

Henke, Steven F., El-Azab, Anter, Erlebacher, Gordon, Department of Scientific Computing, Florida State University
 Abstract/Description

Heteroepitaxial thin films are essential components in many technological applications including optical, electronic, and other functional devices. These films are also becoming important in coating technologies for high-temperature materials applications. Typical heteroepitaxial systems involve one or more solid phases deposited on a support structure called the substrate. Often the lattice and thermal mismatch in these systems results in significant elastic strains that, under the appropriate temperature conditions, drive mass transport by diffusion. Surface diffusion in these systems is usually a dominant mass transport mechanism that leads to morphological evolution of the surface. This evolution is called stress-driven morphological growth, and it has received much attention from materials modelers. In the current work, the problem of stress-driven morphological evolution in strained thin films is revisited; we develop a generalized formulation of this problem in the nonlinear regime based upon a curvilinear coordinate formalism and a finite element solution of the elastic subproblem. This combination of methods facilitates the analysis of the onset of the instability and the early-stage temporal evolution of the film surface. We apply our numerical scheme to surface wave, dot, pit, and ring morphologies and demonstrate the effects of model parameters on the incipient instabilities.
 Date Issued
 2010
 Identifier
 FSU_migr_etd4126
 Format
 Thesis
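In the classic linear (Asaro-Tiller-Grinfeld-type) analysis of this instability under surface diffusion, a surface perturbation of wavenumber k grows at a rate of the form s(k) = c1*k^3 - c2*k^4, where c1 collects the elastic driving terms and c2 the surface-energy terms. A sketch with generic, dimensionless coefficients; the thesis's generalized nonlinear formulation goes well beyond this linear picture.

```python
# Illustrative sketch (linear theory only, not the thesis formulation):
# growth rate s(k) = c1*k^3 - c2*k^4 for a stressed film evolving by
# surface diffusion; c1, c2 are generic positive coefficients.

def growth_rate(k, c1, c2):
    """Linear growth rate of a perturbation with wavenumber k."""
    return c1 * k ** 3 - c2 * k ** 4

def critical_wavenumber(c1, c2):
    """s(k) > 0 (unstable) only for k below this cutoff."""
    return c1 / c2

def fastest_wavenumber(c1, c2):
    # d/dk (c1 k^3 - c2 k^4) = 3 c1 k^2 - 4 c2 k^3 = 0  ->  k = 3 c1 / (4 c2)
    return 3.0 * c1 / (4.0 * c2)
```

Long waves grow (elastic relaxation wins) and short waves decay (surface energy wins), and the fastest-growing wavenumber sets the incipient dot/pit spacing that the numerical scheme then evolves into the nonlinear regime.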
 Title
 Integrating Two-Way Interaction Between Fluids and Rigid Bodies in the Real-Time Particle Systems Library.
 Creator

Young, Andrew S., Erlebacher, Gordon, Plewa, Tomasz, Shanbhag, Sachin, Department of Scientific Computing, Florida State University
 Abstract/Description

In the last 15 years, video games have become a dominant form of entertainment. The popularity of video games means children are spending more of their free time playing them; usually, the time spent on homework or studying is decreased to allow for the extended time spent on video games. In an effort to address this problem, researchers have begun creating educational video games. Some studies have shown a significant increase in learning ability from video games or other interactive instruction. Educational games can be used in conjunction with formal educational methods to improve retention among students. To facilitate the creation of games for science education, the RTPS library was created by Ian Johnson to simulate fluid dynamics in real time. This thesis seeks to extend the RTPS library to provide more realistic simulations. Rigid body dynamics have been added to the simulation framework, and a two-way coupling between the rigid bodies and fluids has been implemented. Another contribution to the library was the addition of fluid surface rendering to provide a more realistic-looking simulation. Finally, a Qt interface was added to allow for modification of simulation parameters in real time. In order to perform these simulations in real time one must have a significant amount of computational power. Though processing power has seen consistent growth for many years, the demand for higher-performance desktops grew faster than CPUs could satisfy. In 2006, general-purpose graphics processing (GPGPU) was introduced with the CUDA programming language. This new language allowed developers access to an incredible amount of processing power, with some researchers reporting up to 10-times speedups over a CPU. With this power, one can perform simulations on a desktop computer that were previously only feasible on supercomputers. GPGPU technology is utilized in this thesis to enable real-time simulations.
 Date Issued
 2012
 Identifier
 FSU_migr_etd5463
 Format
 Thesis
 Title
 Sparse-Grid Methods for Several Types of Stochastic Differential Equations.
 Creator

Zhang, Guannan, Gunzburger, Max D., Wang, Xiaoming, Peterson, Janet, Wang, Xiaoqiang, Ye, Ming, Webster, Clayton, Burkardt, John, Department of Scientific Computing, Florida State University
 Abstract/Description

This work focuses on developing and analyzing novel, efficient sparsegrid algorithms for solving several types of stochastic ordinary/partial differential equations and corresponding inverse problem, such as parameter identification. First, we consider linear parabolic partial differential equations with random diffusion coefficients, forcing term and initial condition. Error analysis for a stochastic collocation method is carried out in a wider range of situations than previous literatures,...
Show moreThis work focuses on developing and analyzing novel, efficient sparsegrid algorithms for solving several types of stochastic ordinary/partial differential equations and corresponding inverse problem, such as parameter identification. First, we consider linear parabolic partial differential equations with random diffusion coefficients, forcing term and initial condition. Error analysis for a stochastic collocation method is carried out in a wider range of situations than previous literatures, including input data that depend nonlinearly on the random variables and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate the exponential decay of the interpolation error in the probability space for both semidiscrete and fullydiscrete solutions. Second, we consider multidimensional backward stochastic differential equations driven by a vector of white noise. A sparsegrid scheme are proposed to discretize the target equation in the multidimensional timespace domain. In our scheme, the time discretization is conducted by the multistep scheme. In the multidimensional spatial domain, the conditional mathematical expectations derived from the original equation are approximated using sparsegrid GaussHermite quadrature rule and adaptive hierarchical sparsegrid interpolation. Error estimates are rigorously proved for the proposed fullydiscrete scheme for multidimensional BSDEs with certain types of simplified generator functions. Third, we investigate the propagation of input uncertainty through nonlocal diffusion models. Since the stochastic local diffusion equations, e.g. heat equations, have already been well studied, we are interested in extending the existing numerical methods to solve nonlocal diffusion problems. In this work, we use sparsegrid stochastic collocation method to solve nonlocal diffusion equations with colored noise and MonteCarlo method to solve the ones with white noise. 
Our numerical experiments show that the existing methods can achieve the desired accuracy in the nonlocal setting. Moreover, in the white noise case, the nonlocal diffusion operator can reduce the variance of the solution because the nonlocal diffusion operator has "smoothing" effect on the random field. At last, stochastic inverse problem is investigated. We propose sparsegrid Bayesian algorithm to improve the efficiency of the classic Bayesian methods. Using sparsegrid interpolation and integration, we construct a surrogate posterior probability density function and determine an appropriate alternative density which can capture the main features of the true PPDF to improve the simulation efficiency in the framework of indirect sampling. By applying this method to a groundwater flow model, we demonstrate its better accuracy when compared to bruteforce MCMC simulation results.
 Date Issued
 2012
 Identifier
 FSU_migr_etd5298
 Format
 Thesis
 Title
 Solution of the Navier-Stokes Equations by the Finite Element Method Using Reduced Order Modeling.
 Creator

Forinash, Nick, Peterson, Janet, Plewa, Tomasz, Shanbhag, Sachin, Department of Scientific Computing, Florida State University
 Abstract/Description

Reduced Order Models (ROMs) provide a low-dimensional alternative form of a system of differential equations, which permits faster computation of solutions. In this paper, Poisson's equation in two dimensions, the heat equation in one dimension, and a nonlinear reaction-diffusion equation in one dimension are solved using the Galerkin formulation of the Finite Element Method (FEM) in conjunction with Newton's method. Reduced order modeling by Proper Orthogonal Decomposition (POD) is then used to accelerate the solution of the successive linear systems required by Newton's method; this demonstrates the viability of the method on simple problems. The Navier-Stokes (NS) equations are introduced and solved by FEM. ROMs using both POD and clustering by Centroidal Voronoi Tessellation (CVT) are then used to solve the NS equations, and the results are compared with the FEM solution. The specific NS problem we consider has inhomogeneous Dirichlet boundary conditions, and the treatment of these boundary conditions is explained. The decreases in computation time obtained with the ROM methods for the various equations are reported.
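The POD step described above extracts a low-dimensional basis from solution snapshots via the singular value decomposition. A minimal sketch with a synthetic snapshot family (the parametrized function and the energy tolerance are arbitrary stand-ins for actual finite element solutions):

```python
import numpy as np

# Snapshot matrix: columns are "solutions" u(x; mu_j) of a parametrized problem
# (a synthetic family here, standing in for finite element solutions).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 3.0, 30)
S = np.column_stack([np.sin(mu * np.pi * x) * np.exp(-mu * x) for mu in mus])

# POD: the left singular vectors of S give the energy-optimal reduced basis.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1   # number of modes to keep
Phi = U[:, :r]                                      # POD basis, r << 200

# Any snapshot is well approximated by its projection onto the POD space.
u = S[:, 17]
u_rom = Phi @ (Phi.T @ u)
err = np.linalg.norm(u - u_rom) / np.linalg.norm(u)
```

In a ROM, the full-order operators are projected onto `Phi` once, so each subsequent linear solve involves only r unknowns instead of the full grid dimension.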
 Date Issued
 2012
 Identifier
 FSU_migr_etd5352
 Format
 Thesis
 Title
 Construction of Delaunay Triangulations on the Sphere: A Parallel Approach.
 Creator

Larrea, Veronica G. Vergara, Gunzburger, Max, Meyer-Baese, Anke, Peterson, Janet, Wilgenbusch, Jim, Department of Scientific Computing, Florida State University
 Abstract/Description

This thesis explores possible improvements in the construction of Delaunay triangulations on the sphere by designing and implementing a parallel alternative to the software package STRIPACK. First, it gives an introduction to Delaunay triangulations in the plane and presents current methods for their construction. Then, these concepts are mapped to the spherical case: the Spherical Delaunay Triangulation (SDT). To provide a better understanding of the design choices, this document includes a brief overview of parallel programming, followed by the details of the implementation of the SDT generation code. In addition, it provides examples of resulting SDTs as well as benchmarks that analyze the code's performance. This project was inspired by the concepts presented in Robert Renka's work and was implemented in C++ using MPI.
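One convenient (serial) route to a spherical Delaunay triangulation is the standard fact that, for points lying on a sphere, the facets of their convex hull are exactly the spherical Delaunay triangles. A small SciPy sketch, not the thesis's parallel MPI code:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
p = rng.standard_normal((500, 3))                 # random directions ...
p /= np.linalg.norm(p, axis=1, keepdims=True)     # ... normalized onto the sphere

# For points on a sphere, every point is a hull vertex, and the convex hull
# facets coincide with the spherical Delaunay triangles.
hull = ConvexHull(p)
tri = hull.simplices                              # (n_triangles, 3) vertex indices

# Sanity check via Euler's formula for a triangulated sphere: F = 2V - 4.
n_triangles, n_vertices = len(tri), len(p)
```

A parallel code like the one described above would instead partition the sphere, triangulate subdomains independently, and stitch the boundaries, but the hull identity is a handy correctness check.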
 Date Issued
 2011
 Identifier
 FSU_migr_etd4557
 Format
 Thesis
 Title
 Spherical Centroidal Voronoi Tessellation-Based Unstructured Meshes for Multidomain Multiphysics Applications.
 Creator

Womeldorff, Geoffrey A., Gunzburger, Max, Peterson, Janet, Gallivan, Kyle, Erlebacher, Gordon, Wang, Xiaoqiang, Ringler, Todd, Department of Scientific Computing, Florida State University
 Abstract/Description

This dissertation presents and investigates ideas for improving the creation of quality spherical centroidal Voronoi tessellations (SCVTs), which are to be used for multiphysics, multidomain applications. As an introduction, we discuss grid generation on the sphere in a broad fashion. Next, we discuss the theory of CVTs in general and specifically on the sphere, and then consider the iterative processes, such as Lloyd's algorithm, that are used to construct them. Following this, we describe a method for defining density functions via images, so that we can shape generator density in an intuitive yet arbitrary manner, and then a method by which SCVTs can easily be adapted to conform to arbitrary sets of line segments, or shorelines. We then discuss sample meshes used for various physical and non-physical applications. Next, as a proof of concept, we discuss two sample applications: we adapt the shallow water model from the Model for Prediction Across Scales (MPAS) to use our grids for a more accurate border, and we discuss elliptic interface problems both with and without hanging nodes. Finally, we share a few concluding remarks.
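Lloyd's algorithm, mentioned above, alternates between assigning points to their nearest generator and moving each generator to the centroid of its region. A minimal planar sketch of a discrete (sampled) version of the iteration; for an SCVT, each updated generator would additionally be projected back onto the sphere, and the samples would be drawn according to the chosen density function:

```python
import numpy as np

def cvt_energy(g, samples):
    """Mean squared distance from each sample to its nearest generator."""
    d2 = ((samples[:, None, :] - g[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).mean())

def lloyd(g, samples, iters):
    """Discrete Lloyd iteration: assign samples to the nearest generator,
    then move each generator to the centroid of the samples it owns."""
    g = g.copy()
    for _ in range(iters):
        owner = ((samples[:, None, :] - g[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for k in range(len(g)):
            cell = samples[owner == k]
            if len(cell):
                g[k] = cell.mean(axis=0)
                # for an SCVT one would renormalize: g[k] /= ||g[k]||
    return g

rng = np.random.default_rng(1)
samples = rng.random((20000, 2))     # uniform density on the unit square
g0 = rng.random((16, 2))             # random initial generators
g = lloyd(g0, samples, iters=60)     # the CVT energy decreases monotonically
```

The defining property of the converged state is that each generator coincides with the centroid of its own Voronoi region, which is what the energy decrease drives toward.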
 Date Issued
 2011
 Identifier
 FSU_migr_etd5250
 Format
 Thesis
 Title
 Numerical Implementation of Continuum Dislocation Theory.
 Creator

Xia, Shengxu, El-Azab, Anter, Plewa, Tomasz, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

This thesis aims at theoretical and computational modeling of continuum dislocation theory coupled with its internal elastic field. In this continuum description, the space-time evolution of the dislocation density is governed by a set of hyperbolic partial differential equations. These PDEs must be complemented by elastic equilibrium equations in order to obtain the velocity field that drives dislocation motion on slip planes. Simultaneously, the plastic eigenstrain tensor, which serves as a known field in the equilibrium equations, must be updated by the motion of dislocations according to Orowan's law. Therefore, a coupled stress-dislocation process is involved when a crystal undergoes elastoplastic deformation. The solutions of the equilibrium equations and of the dislocation density evolution equations are tested on a few examples in order to ensure that appropriate computational schemes are selected for each. A coupled numerical scheme is proposed, in which the resolved shear stress and Orowan's law are the two passages that connect the two sets of PDEs. The numerical implementation of this scheme is illustrated by an example that simulates the recovery process of a dislocated cubic crystal. The simulated result demonstrates the possibility of coupling macroscopic (stress) and microscopic (dislocation density tensor) physical quantities to obtain the crystal's mechanical response.
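Hyperbolic transport equations of the kind described above are typically discretized with upwind-type schemes. A minimal one-dimensional sketch with a constant, prescribed velocity (in the coupled problem the velocity would instead come from the resolved shear stress):

```python
import numpy as np

def upwind_step(rho, v, dx, dt):
    """One first-order upwind step for d(rho)/dt + v*d(rho)/dx = 0 on a
    periodic grid, assuming v > 0 and the CFL condition v*dt/dx <= 1."""
    return rho - (v * dt / dx) * (rho - np.roll(rho, 1))

nx = 400
dx = 1.0 / nx
v = 1.0
dt = 0.5 * dx / v                          # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
rho = np.exp(-200.0 * (x - 0.3) ** 2)      # initial density pulse
total0 = rho.sum()
for _ in range(200):                       # advect the pulse by v*t = 0.25
    rho = upwind_step(rho, v, dx, dt)
# under the CFL limit the scheme is conservative and positivity preserving,
# both essential for a density field
```

Conservation and positivity are the properties that make upwind-type schemes a natural fit for dislocation density transport, at the cost of some numerical diffusion.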
 Date Issued
 2011
 Identifier
 FSU_migr_etd5280
 Format
 Thesis
 Title
 Practical Optimization Algorithms in the Data Assimilation of Large-Scale Systems with Non-Linear and Non-Smooth Observation Operators.
 Creator

Steward, Jeffrey L. (Jeffrey Lawrence), Navon, Ionel Michael, Liu, Guosheng, Gunzburger, Max, Erlebacher, Gordon, Zupanski, Milijia, Karmitsa, Napsu, Department of Scientific Computing, Florida State University
 Abstract/Description

This dissertation compares and contrasts large-scale optimization algorithms for use in variational and sequential data assimilation on two novel problems, chosen to highlight the challenges of nonlinear and non-smooth data assimilation. The first problem explores the impact of a highly nonlinear observation operator and highlights the importance of background information in the data assimilation problem. The second problem tackles large-scale data assimilation with a non-smooth observation operator. Together, these two cases show both the importance of choosing an appropriate data assimilation method and, when a variational or variationally inspired method is chosen, the importance of choosing the right optimization algorithm for the problem at hand.
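The variational setting described above minimizes a cost function balancing a background (prior) state against observations. A toy 3D-Var sketch with a hypothetical componentwise-squaring observation operator; the dimensions, covariances, and noise levels are arbitrary, and real systems would use far larger states and more careful preconditioning:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 5
x_true = 1.0 + rng.random(n)
xb = x_true + 0.1 * rng.standard_normal(n)      # background (prior) state
B_inv = np.eye(n) / 0.1**2                      # inverse background covariance
R_inv = np.eye(n) / 0.01**2                     # inverse observation covariance
H = lambda x: x**2                              # hypothetical nonlinear operator
y = H(x_true) + 0.01 * rng.standard_normal(n)   # noisy observations

def J(x):
    """3D-Var cost: misfit to the background plus misfit to the observations."""
    db, do = x - xb, y - H(x)
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def gradJ(x):
    # chain rule through H(x) = x**2: dH/dx = diag(2x)
    return B_inv @ (x - xb) - 2.0 * x * (R_inv @ (y - H(x)))

res = minimize(J, xb, jac=gradJ, method="L-BFGS-B")
xa = res.x      # the analysis state, with J(xa) < J(xb)
```

L-BFGS-type methods are a common choice here precisely because they need only cost and gradient evaluations; a non-smooth observation operator breaks the gradient assumption and motivates the bundle-type alternatives the dissertation examines.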
 Date Issued
 2012
 Identifier
 FSU_migr_etd5203
 Format
 Thesis
 Title
 A Sender-Centric Approach to Spam and Phishing Control.
 Creator

Sanchez, Fernando X. (Fernando Xavier), Duan, Zhenhai, Niu, Xufeng, Yuan, Xin, Aggarwal, Sudhir, Department of Scientific Computing, Florida State University
 Abstract/Description

The Internet email system, a popular online communication tool, has been increasingly misused by ill-willed users to carry out malicious activities including spamming and phishing. Alarmingly, in recent years the nature of email-based malicious activities has evolved from being purely annoying (with the notorious example of spamming) to being criminal (with the notorious example of phishing). Despite more than a decade of anti-spam and anti-phishing research and development efforts, both the sophistication and the volume of spam and phishing messages on the Internet have continuously risen over the years. A key difficulty in the control of email-based malicious activities is that malicious actors have great operational flexibility, in terms of both the email delivery infrastructure and the email content; moreover, existing anti-spam and anti-phishing measures allow for an arms race between malicious actors and the anti-spam and anti-phishing community. In order to effectively control email-based malicious activities such as spamming and phishing, we argue that we must limit (and ideally, eliminate) the operational flexibility that malicious actors have enjoyed over the years. In this dissertation we develop and evaluate a sender-centric approach (SCA) to the problem of email-based malicious activities so as to control spam and phishing emails on the Internet. SCA consists of three complementary components, which together greatly limit the operational flexibility of malicious actors in sending spam and phishing emails. The first two components of SCA focus on limiting the infrastructural flexibility of malicious actors in delivering emails, and the last component focuses on limiting their flexibility in manipulating the content of emails.
In the first component of SCA, we develop a machine-learning based system to prevent malicious actors from utilizing compromised machines to send spam and phishing emails. Given that the vast majority of spam and phishing emails are delivered via compromised machines on the Internet today, this system can greatly limit the infrastructural flexibility of malicious actors. Ideally, malicious actors should be forced to send spam and phishing messages from their own machines, so that blacklists and reputation-based systems can be used effectively to block such emails; the machine-learning based system we develop in this dissertation is a critical step towards this goal. In recent years, malicious actors have also started to employ advanced techniques to hijack network prefixes when conducting email-based malicious activities, which makes the control and attribution of spam and phishing emails even harder. In the second component of SCA, we therefore develop a practical approach to improve the security of the Internet inter-domain routing protocol BGP. Given that the key difficulties in adopting any mechanism to secure inter-domain routing are the mechanism's overhead and its incremental deployment properties, our scheme is designed to have minimum overhead, and it can be incrementally deployed by individual networks to protect themselves (and their customer networks), so that individual networks have an incentive to deploy it. In addition to this infrastructural flexibility, malicious actors have enormous flexibility in manipulating the format and content of email messages; in particular, they can forge phishing messages that are close to legitimate messages in both format and content. However, although malicious actors have immense power over the format and content of phishing emails, they cannot completely hide how a message is delivered to its recipients.
Based on this observation, in the last component of SCA we develop a system to identify phishing emails based on sender-related information instead of the format or content of email messages. Together, the three complementary components of SCA greatly limit the operational flexibility and capability that malicious actors have enjoyed over the years in delivering spam and phishing emails, and we believe that SCA makes a significant contribution towards addressing the spam and phishing problem on the Internet.
 Date Issued
 2011
 Identifier
 FSU_migr_etd5163
 Format
 Thesis
 Title
 Barrier Island Responses to Storms and Sea-Level Rise: Numerical Modeling and Uncertainty Analysis.
 Creator

Dai, Heng, Ye, Ming, Slice, Dennis, Plewa, Tomasz, Department of Scientific Computing, Florida State University
 Abstract/Description

In response to a potentially increasing rate of sea-level rise, planners and engineers are making accommodations in their management plans for the protection of coastal infrastructure and natural resources. Dunes and barrier islands are important for coastal protection and restoration because they absorb storm energy and play an essential role in sediment transport. Most traditional coastal models do not simulate the joint evolution of dunes and barrier islands and do not explicitly address sea-level rise. A new model was developed in this study that represents basic barrier island processes under sea-level rise and links the dynamics of the different components of barrier islands. The model was used to evaluate near-future (100-year) responses of a semi-synthetic island, with the characteristics of Santa Rosa Island of Florida, USA, to five rates of sea-level rise. The new model is capable of providing considerable practical information about the effects of different sea-level rise scenarios on the test island. The modeling results show that different areas and components of the island respond differently to sea-level rise. Depending on the rate of sea-level rise and the overwash sediment supply, the evolution of dunes and barrier islands matters for habitat suitable for coastal birds and for back-barrier salt marshes. The modeling results are inherently uncertain due to unknown storm variability and sea-level rise scenarios. The storm uncertainty, characterized as parametric uncertainty, and its propagation to the modeling results were assessed using the Monte Carlo (MC) method for the synthetic barrier island. A total of 1000 realizations of storm magnitude, frequency, and track through the barrier island were generated and used for the MC simulation. To address the scenario uncertainty, five sea-level rise scenarios were considered, using the current rate and four additional rates that lead to sea-level rises of 0.5 m, 1.0 m, 1.5 m, and 2.0 m in the next 100 years.
Parametric uncertainty in the simulated beach dune heights and backshore positions was assessed for the individual scenarios. For a given scenario, the parametric uncertainty varies with time, growing larger as time increases. The parametric uncertainty also differs between sea-level rise scenarios, being larger for more severe sea-level rise. The method of scenario averaging was used to quantify the scenario uncertainty; the scenario-averaged results lie between the results of the smallest and largest sea-level rise scenarios. The results of the uncertainty analysis provide guidelines for coastal management and the protection of coastal ecology.
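The two-level uncertainty treatment described above (Monte Carlo over storm parameters within each scenario, then averaging across scenarios) can be sketched generically. The toy response model below is NOT the dissertation's barrier island model; the growth and erosion numbers are invented solely to illustrate the sampling-and-averaging pattern:

```python
import numpy as np

rng = np.random.default_rng(3)

def dune_height_after(years, slr_total, storms_per_year, rng):
    """Hypothetical toy response model (NOT the dissertation's model): the
    dune grows steadily and loses height to random storms, with losses
    amplified by the total sea-level rise."""
    h = 3.0
    for _ in range(years):
        h += 0.02                                        # growth, m/yr
        n_storms = rng.poisson(storms_per_year)
        h -= (1.0 + slr_total) * 0.01 * rng.exponential(1.0, n_storms).sum()
    return max(h, 0.0)

scenarios = [0.2, 0.5, 1.0, 1.5, 2.0]     # total SLR over 100 yr, in meters
weights = np.full(len(scenarios), 1.0 / len(scenarios))

means = []
for slr in scenarios:                      # parametric (storm) uncertainty: MC
    sims = [dune_height_after(100, slr, 1.5, rng) for _ in range(500)]
    means.append(float(np.mean(sims)))

# scenario averaging: weight each scenario's MC statistic by its probability
h_avg = float(np.dot(weights, means))
```

As the abstract notes for the real model, the scenario-averaged statistic necessarily lies between the results of the mildest and the most severe scenarios.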
 Date Issued
 2011
 Identifier
 FSU_migr_etd4790
 Format
 Thesis
 Title
 Monte Carlo Simulation of Phonon Transport in Uranium Dioxide.
 Creator

Deskins, Walter Ryan, El-Azab, Anter, Plewa, Tomasz, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

Heat is transferred in crystalline semiconductor materials via lattice vibrations. Lattice vibrations are treated with a wave-particle duality, just as photons are quantum mechanical representations of electromagnetic waves; the quanta of energy of these lattice waves are called phonons. The Boltzmann Transport Equation (BTE) has proved to be a powerful tool in modeling phonon heat conduction in crystalline solids. The BTE tracks the phonon number density function as it evolves according to the drift of all phonons and to phonon-phonon interactions (or collisions). Unlike Fourier's law, which is limited to describing diffusive energy transport, the BTE can accurately predict energy transport in both the ballistic (virtually no collisions) and diffusive regimes. Motivated by the need to understand thermal transport in irradiated uranium dioxide at the mesoscale, this work investigates phonon transport in UO2 using Monte Carlo simulation. The simulation scheme solves the Boltzmann transport equation for phonons within a relaxation time approximation, in which the equation is simplified by assigning a time scale to each scattering mechanism associated with phonon interactions. The Monte Carlo method is first benchmarked by comparison to similar models for silicon. Unlike most previous work on solving this equation by the Monte Carlo method, the momentum and energy conservation laws for phonon-phonon interactions in UO2 are treated exactly: the magnitudes of the possible wave vectors and the frequency space are discretized, and a numerical routine is implemented which considers all possible phonon-phonon interactions and chooses those which obey the conservation laws. The simulation scheme accounts for the acoustic and optical branches of the dispersion relationships of UO2; the six lowest energy branches in the [001] direction are tracked within the Monte Carlo simulation.
Because of their predicted low group velocities, the three remaining high-energy branches are treated as a reservoir of phonons at constant energy in k-space. These phonons contribute to the thermal conductivity only by scattering with the six lower energy branches, not through their own group velocities. Using periodic boundary conditions, this work presents results illustrating the diffusion limit of phonon transport in UO2 single crystals, and computes the thermal conductivity of the material in the diffusion limit from the detailed phonon dynamics. The effect of temperature on the conductivity is predicted, and the results are compared with experimental data available in the literature.
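In a relaxation-time Monte Carlo scheme like the one described above, the free-flight times between scattering events are exponentially distributed with the mechanism's relaxation time. A minimal sketch with hypothetical parameter values, checking that the sampled mean free path converges to v_g * tau:

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 10e-12      # hypothetical relaxation time, s
v_g = 3000.0      # hypothetical group velocity, m/s

# In the relaxation time approximation, free-flight times between scattering
# events are exponential: t = -tau * ln(U), with U uniform on (0, 1).
t = -tau * np.log(1.0 - rng.random(200_000))
mfp = v_g * float(t.mean())     # sampled mean free path, approaches v_g * tau
```

A full simulation would additionally draw phonon frequencies and branches from the discretized dispersion relation and enforce the conservation laws at each scattering event, as the abstract describes.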
 Date Issued
 2011
 Identifier
 FSU_migr_etd4796
 Format
 Thesis
 Title
 Numerical Methods for Deterministic and Stochastic Nonlocal Problems in Diffusion and Mechanics.
 Creator

Chen, Xi, Gunzburger, Max, Wang, Xiaoming, Peterson, Janet, Wang, Xiaoqiang, Ye, Ming, Burkardt, John, Department of Scientific Computing, Florida State University
 Abstract/Description

In this dissertation, the recently developed peridynamic nonlocal continuum model for solid mechanics is extensively studied; specifically, we study numerical methods for the deterministic and stochastic steady-state peridynamics models. In contrast to classical partial differential equation models, the peridynamic model is an integro-differential equation that does not involve spatial derivatives of the displacement field. As a result, the peridynamic model admits solutions having jump discontinuities, and it has been successfully applied to fracture problems. This dissertation consists of three major parts. The first part focuses on the one-dimensional steady-state peridynamics model. Based on a variational formulation, continuous and discontinuous Galerkin finite element methods are developed for the peridynamic model, and optimal convergence rates for different continuous and discontinuous manufactured solutions are obtained. A strategy for identifying the discontinuities of the solution is developed and implemented, the convergence of the peridynamics model to the classical elasticity model is studied, and some related nonlocal problems are also considered. In the second part, we focus on the two-dimensional steady-state peridynamics model. Building on the numerical strategies and results from the one-dimensional model, we develop and implement the corresponding approaches for the two-dimensional case, again obtaining optimal convergence rates for different continuous and discontinuous manufactured solutions. In the third part, we study the stochastic peridynamics model. We focus on a version of the model whose forcing terms are described by a finite-dimensional random vector, an assumption often called the finite-dimensional noise assumption. Monte Carlo methods and stochastic collocation methods with full tensor product and sparse grids, based on this stochastic peridynamics model, are implemented and compared.
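A one-dimensional, bond-based steady-state peridynamic solve can be sketched with a simple collocation discretization. This is only an illustration of the nonlocal operator and the collar-type Dirichlet conditions, not the dissertation's Galerkin finite element method; the micromodulus is normalized so that the discrete operator reproduces u'' exactly on quadratics, so a quadratic manufactured solution is recovered to machine precision:

```python
import numpy as np

# 1-D bond-based peridynamics, steady state. The nonlocal operator
#   (L u)_i = sum_k c * w_k * (u_{i+k} + u_{i-k} - 2*u_i),  k*h <= delta,
# stands in for u''(x); Dirichlet data is imposed on collar regions of
# width delta at both ends, as is standard for nonlocal problems.
n, m = 200, 4                                # interior nodes; horizon delta = m*h
h = 1.0 / n
N = n + 2 * m
x = (np.arange(N) - m + 0.5) * h             # nodes, including the two collars
s = np.arange(1, m + 1) * h                  # bond lengths within the horizon
w = np.full(m, h)
w[-1] *= 0.5                                 # trapezoid-style quadrature weight
c = 1.0 / np.sum(w * s**2)                   # micromodulus: L exact on quadratics

u_exact = x**2                               # manufactured solution, u'' = 2
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    if i < m or i >= n + m:                  # collar nodes: clamp to the data
        A[i, i] = 1.0
        b[i] = u_exact[i]
    else:                                    # interior: nonlocal force balance
        A[i, i] = -2.0 * c * w.sum()
        for k in range(1, m + 1):
            A[i, i - k] += c * w[k - 1]
            A[i, i + k] += c * w[k - 1]
        b[i] = 2.0

u = np.linalg.solve(A, b)
err = float(np.abs(u - u_exact).max())       # machine-precision recovery
```

Note that no spatial derivative of u appears anywhere in the assembled operator, which is exactly the feature that lets peridynamic solutions carry jump discontinuities.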
 Date Issued
 2012
 Identifier
 FSU_migr_etd4753
 Format
 Thesis
 Title
 A Computational Method for Age-at-Death Estimation Based on the Pubic Symphysis.
 Creator

Stoyanova, Detelina, Slice, Dennis, Burkardt, John, Ye, Ming, Shanbhag, Sachin, Department of Scientific Computing, Florida State University
 Abstract/Description

A significant component of forensic science is analyzing bones to assess the age at death of an individual, and forensic anthropologists often include the pubic symphysis in such studies. Subjective methods, such as the Suchey-Brooks method, are currently used to analyze the pubic symphysis. This thesis examines a more objective, quantitative method, which analyzes 3D surface scans of the pubic symphysis and implements a thin plate spline algorithm modeling the bending of a flat plane to approximately match the surface of the bone. The algorithm minimizes the bending energy required for this transformation. Results presented here show that there is a correlation between the minimum bending energy and the age at death of the individual. The method could be useful to medicolegal practitioners.
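The thin plate spline fit and its bending energy can be sketched directly: interpolate heights over 2-D points with the r^2 log r kernel and evaluate the quadratic form w^T K w, which vanishes for a flat (affine) surface and grows with bending. The synthetic "flat" and "bent" height fields below are illustrative stand-ins, not scan data:

```python
import numpy as np

def tps_bending_energy(pts, z):
    """Fit a thin plate spline interpolating heights z at 2-D points pts
    and return its bending energy (up to a constant positive factor).
    The TPS minimizes the integral of f_xx^2 + 2*f_xy^2 + f_yy^2."""
    n = len(pts)
    r2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    # kernel U(r) = r^2 log r, written as 0.5 * r^2 * log(r^2)
    K = np.where(r2 > 0.0, 0.5 * r2 * np.log(np.maximum(r2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), pts])    # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    w = sol[:n]
    return float(w @ K @ w)                  # zero iff the surface is a plane

rng = np.random.default_rng(4)
pts = rng.random((40, 2))
flat = pts @ np.array([0.3, -0.7]) + 0.2         # heights lying on a plane
bent = flat + 0.1 * np.sin(8.0 * pts[:, 0])      # a genuinely bent surface
e_flat = tps_bending_energy(pts, flat)
e_bent = tps_bending_energy(pts, bent)
```

The affine block P absorbs any planar component, so the bending energy measures only the non-planar deformation, which is the quantity the thesis correlates with age at death.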
 Date Issued
 2012
 Identifier
 FSU_migr_etd7010
 Format
 Thesis
 Title
 Computational Modeling of Elastic Fields in Dislocation Dynamics.
 Creator

Mohamed, Mamdouh, El-Azab, Anter, Van Dommelen, Leon, Erlebacher, Gordon, Ye, Ming, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

In the present work, we investigate the internal fields generated by the dislocation structures that form during the deformation of copper single crystals. In particular, we perform computational modeling of the statistical and morphological characteristics of the dislocation structures obtained by the dislocation dynamics simulation method and compare the results with X-ray microscopy measurements of the same quantities. This comparison is performed for both the dislocation structures and their internal elastic fields, for the cases of homogeneous deformation and indentation of copper single crystals. A direct comparison between dislocation dynamics predictions and X-ray measurements plays a key role in demonstrating the fidelity of discrete dislocation dynamics as a predictive computational mechanics tool and in understanding the X-ray data. For the homogeneous deformation case, dislocation dynamics simulations were performed under periodic boundary conditions, and the internal fields of the dislocations were computed by solving an elastic boundary value problem for the many-dislocation system using the finite element method. The distributions and pair correlation functions of all internal elastic fields and of the dislocation density were computed. For the internal stress field, the availability of such statistical information paves the way for the development of a density-based mobility law for dislocations in continuum dislocation dynamics models, by correlating the internal-stress statistics with dislocation velocity statistics. The statistical analysis of the lattice rotation and dislocation density fields in the deformed crystal made possible a direct comparison with X-ray measurements of the same quantities. Indeed, a comparison between the simulations and the experimental measurements has been possible, and it revealed important similarities and differences between the simulation results and the experimental data.
In the case of indentation, which represents a highly inhomogeneous deformation, a contact boundary value problem was solved in conjunction with a discrete-dislocation dynamics simulation model; the discrete dislocation dynamics simulation was thus enabled to handle finite domains under mixed traction/displacement boundary conditions. The load-displacement curves for the indentation experiments were analyzed with regard to cross slip, indentation speed, and indenter shape. The lattice distortion fields obtained from the indentation simulations were directly compared with their experimental counterparts. Other indentation simulations were also carried out, giving insight into different aspects of microscale indentation deformation.
 Date Issued
 2012
 Identifier
 FSU_migr_etd6962
 Format
 Thesis
 Title
 Thermal Conductivity and Self-Generation of Magnetic Fields in Discontinuous Plasmas.
 Creator

Modica, Frank, Plewa, Tomasz, Navon, Ionel Michael, Sussman, Mark, Department of Scientific Computing, Florida State University
 Abstract/Description

Hydrodynamic instabilities are the driving force behind complex fluid processes that occur from everyday scenarios to the most extreme physical conditions of the universe. The Rayleigh-Taylor instability (RTI) develops when a heavy fluid is accelerated by a light fluid, resulting in sinking spikes, rising bubbles, and material mixing. Laser experiments have observed features of RTI that cannot be explained with pure hydrodynamic models. For this computational study we have implemented and verified extended physics modules for anisotropic thermal conduction and self-generated magnetic fields in the FLASH-based Proteus code, using the Braginskii plasma theory. We have used this code to simulate RTI in a basic plasma physics context. We obtain results up to 35 nanoseconds (ns) at various resolutions and discuss convergence and computational challenges. We find that magnetic fields as high as 1-10 megagauss (MG) are generated near the fluid interface. Thermal conduction turns out to be essentially isotropic in these conditions, but plays the dominant role in the evolution of the system by smearing out small-scale structure and reducing the RT growth rate; this may account for the relatively featureless RT spikes seen in experiments. We do not, however, observe mass extensions in our simulations. Without thermal conductivity, the magnetic field has the effect of generating what appears to be an additional RT mode, which results in new structure at later times when compared to pure hydro models. Additional physics modules and 3D simulations are needed to complete our Braginskii model of RTI.
Show less  Date Issued
 2012
 Identifier
 FSU_migr_etd5841
 Format
 Thesis
 Title
 Reduced Order Modeling of Reactive Transport in a Column Using Proper Orthogonal Decomposition.
 Creator

McLaughlin, Benjamin R. S., Peterson, Janet, Ye, Ming, Shanbhag, Sachin, Department of Scientific Computing, Florida State University
 Abstract/Description

Estimating parameters for reactive contaminant transport models can be very computationally intensive. Typically this involves solving a forward problem many times, with many degrees of freedom that must be computed each time. We show that reduced order modeling (ROM) by proper orthogonal decomposition (POD) can be used to approximate the solution to the forward model using many fewer degrees of freedom. We provide background on the finite element method and reduced order modeling in one spatial dimension, and apply both methods to a system of linear uncoupled time-dependent equations simulating reactive transport in a column. By comparing the reduced order and finite element approximations, we demonstrate that the reduced model, while having many fewer degrees of freedom to compute, gives a good approximation of the high-dimensional (finite element) model. Our results indicate that one may substitute a reduced model in place of a high-dimensional model to solve the forward problem in parameter estimation with many fewer degrees of freedom.
 Date Issued
 2011
 Identifier
 FSU_migr_etd5030
 Format
 Thesis
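The snapshot-based POD compression this abstract describes can be illustrated with a minimal method-of-snapshots sketch. This is a generic illustration, not code from the thesis: a hand-rolled power iteration with deflation stands in for a library eigensolver, and the correlation-matrix scaling is one common convention.

```python
import math
import random

def pod_modes(snapshots, k):
    """Method of snapshots: return k orthonormal POD basis vectors from a
    list of snapshot vectors, via power iteration with deflation on the
    small n-by-n snapshot correlation matrix (n = number of snapshots)."""
    n = len(snapshots)
    dim = len(snapshots[0])
    # correlation matrix C[i][j] = <u_i, u_j> / n
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / n
          for j in range(n)] for i in range(n)]
    modes = []
    for _ in range(k):
        v = [random.random() + 0.1 for _ in range(n)]
        for _ in range(200):  # power iteration for the dominant eigenvector
            w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
            nrm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / nrm for x in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(n)) for i in range(n))
        # POD mode: snapshots combined with eigenvector weights, then normalized
        mode = [sum(v[i] * snapshots[i][p] for i in range(n)) for p in range(dim)]
        mnrm = math.sqrt(sum(x * x for x in mode)) or 1.0
        modes.append([x / mnrm for x in mode])
        # deflate the correlation matrix to expose the next eigenpair
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]
    return modes
```

The reduced forward model is then obtained by projecting the full-order operators onto the span of these modes, so each forward solve involves only k unknowns rather than the full finite element dimension.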
 Title
 Assessment of Parametric and Model Uncertainty in Groundwater Modeling.
 Creator

Lu, Dan, Ye, Ming, Niu, Xufeng, Beerli, Peter, Curtis, Gary, Navon, Michael, Plewa, Tomasz, Department of Scientific Computing, Florida State University
 Abstract/Description

Groundwater systems are open and complex, rendering them prone to multiple conceptual interpretations and mathematical descriptions. When multiple models are acceptable based on available knowledge and data, model uncertainty arises. One way to assess the model uncertainty is postulating several alternative hydrologic models for a site and using model selection criteria to (1) rank these models, (2) eliminate some of them, and/or (3) weight and average predictions generated by multiple models based on their model probabilities. This multimodel analysis has led to some debate among hydrogeologists about the merits and demerits of common model selection criteria such as AIC, AICc, BIC, and KIC. This dissertation contributes to the discussion by comparing the abilities of the two common Bayesian criteria (BIC and KIC) theoretically and numerically. The comparison results indicate that, using MCMC results as a reference, KIC yields more accurate approximations of model probability than does BIC. Although KIC reduces asymptotically to BIC, KIC provides consistently more reliable indications of model quality for a range of sample sizes. In the multimodel analysis, the model averaging predictive uncertainty is a weighted average of the predictive uncertainties of individual models, so it is important to properly quantify each individual model's predictive uncertainty. Confidence intervals based on regression theories and credible intervals based on Bayesian theories are conceptually different ways to quantify predictive uncertainties, and both are widely used in groundwater modeling. This dissertation explores their differences and similarities theoretically and numerically. The comparison results indicate that, given Gaussian distributed observation errors, for linear or linearized nonlinear models, linear confidence and credible intervals are numerically identical when consistent prior parameter information is used.
For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence and credible regions based on the approximate likelihood method are used and intrinsic model nonlinearity is small; but they differ in practice due to numerical difficulties in calculating both confidence and credible intervals. Model error is a more vital issue than differences between confidence and credible intervals for individual models, suggesting the importance of considering alternative models. Model calibration results are the basis for the model selection criteria to discriminate between models. However, how to incorporate calibration data errors into the calibration process is an unsettled problem. It has been seen that, due to improper use of the error probability structure in the calibration, the model selection criteria can lead to an unrealistic situation in which one model receives overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. This dissertation finds that the errors reflected in the calibration should include two parts: measurement errors and model errors. To consider the probability structure of the total errors, I propose an iterative calibration method with two stages of parameter estimation. The multimodel analysis based on the estimation results leads to more reasonable averaging weights and better averaging predictive performance, compared to those obtained when considering only measurement errors. Traditionally, data-worth analyses have relied on a single conceptual-mathematical model with prescribed parameters. Yet this renders model predictions prone to statistical bias and underestimation of uncertainty, and thus affects groundwater management decisions. This dissertation proposes a multimodel approach to optimum data-worth analyses that is based on model averaging within a Bayesian framework.
The developed multimodel Bayesian approach to data-worth analysis works well in a real geostatistical problem. In particular, the selection of targets for additional data collection based on the approach is validated against actual data collected. The last part of the dissertation presents an efficient method of Bayesian uncertainty analysis. While Bayesian analysis is vital to quantify predictive uncertainty in groundwater modeling, its application has been hindered in multimodel uncertainty analysis because of the computational cost of numerous model executions and the difficulty in sampling from the complicated posterior probability density functions of model parameters. This dissertation develops a new method to improve the computational efficiency of Bayesian uncertainty analysis using the sparse-grid method. The developed sparse-grid-based method for Bayesian uncertainty analysis demonstrates accuracy and efficiency superior to classic importance sampling and an MCMC sampler when applied to a groundwater flow model.
 Date Issued
 2012
 Identifier
 FSU_migr_etd5003
 Format
 Thesis
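The model-averaging step this abstract describes turns information-criterion values (AIC, AICc, BIC, KIC; smaller is better) into model weights via the standard exp(-Δ/2) transformation. This is a generic sketch of that rule, not the dissertation's code:

```python
import math

def model_weights(criteria):
    """Convert information-criterion values into model-averaging weights:
    w_i proportional to exp(-(IC_i - IC_min)/2), normalized to sum to 1.
    Subtracting the minimum first avoids overflow for large IC values."""
    best = min(criteria)
    raw = [math.exp(-(c - best) / 2.0) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]
```

A difference of 2 in the criterion halves the odds ratio (weight ratio exp(1) ≈ 2.72), which is why a single dominant model can quickly absorb nearly all the averaging weight, the pathology the dissertation addresses.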
 Title
 Feasibility Study of the Standing Accretion Shock Instability Experiment at the National Ignition Facility.
 Creator

Handy, Tim A., Plewa, Tomasz, Erlebacher, Gordon, Navon, Michael, Department of Scientific Computing, Florida State University
 Abstract/Description

The primary hydrodynamic flow feature of the early explosion phases of a core-collapse supernova is a spherical shock. This shock is born deep in the central regions of the collapsing stellar core, stalls shortly afterward, and in the case of a successful explosion is revived and becomes the supernova shock. The revival process involves a standing accretion shock instability, SASI. This shock instability is considered a key process aiding the core-collapse supernova (ccSN) explosion. The aim of our study is to identify feasible conditions and parameters for an experimental system that is able to capture the essential characteristics of SASI. We use analytic methods and high-resolution hydrodynamic simulations in multidimensions to investigate a possible experimental design at the National Ignition Facility. The experimental configuration involves a steady, spherical shock. We explore a viable region of parameters and obtain limits on the shocked flow geometry. We study the stability properties of the shock and its post-shock region. We discuss key differences between the experimental setup and the astrophysical environment. The obtained flow field closely resembles converging nozzle flow. The post-shock region, in contrast to the supernova setting, is found to be stably stratified and insensitive to perturbations upstream of the shock. We conclude that it is not possible to capture the characteristics of the supernova SASI for the converging shocked flow configuration considered here. However, such a configuration offers a very stable setting for precision studies of shocked, dense, high-temperature plasmas requiring finely controlled conditions.
 Date Issued
 2011
 Identifier
 FSU_migr_etd4891
 Format
 Thesis
 Title
 Real-Time Particle Systems in the Blender Game Engine.
 Creator

Johnson, Ian, Erlebacher, Gordon, Plewa, Tomasz, El-Azab, Anter, Department of Scientific Computing, Florida State University
 Abstract/Description

Advances in computational power have led to many developments in science and entertainment. Powerful simulations which once required expensive supercomputers can now be carried out on a consumer personal computer, and many children and young adults spend countless hours playing sophisticated computer games. The focus of this research is the development of tools which can help bring the entertaining and appealing traits of video games to scientific education. Video game developers use many tools and programming languages to build their games, for example the Blender 3D content creation suite. Blender includes a Game Engine that can be used to design and develop sophisticated interactive experiences. One important tool in computer graphics and animation is the particle system, which makes simulated effects such as fire, smoke and fluids possible. The particle system available in Blender is unfortunately not available in the Blender Game Engine because it is not fast enough to run in real time. One of the main factors contributing to the rise in computational power and the increasing sophistication of video games is the Graphics Processing Unit (GPU). Many consumer personal computers are equipped with powerful GPUs which can be harnessed for general purpose computation. This thesis presents a particle system library accelerated by the GPU using the OpenCL programming language. The library is integrated into the Blender Game Engine, providing an interactive platform for exploring fluid dynamics and creating video games with realistic water effects. The primary system implemented in this research is a fluid simulator using the Smoothed Particle Hydrodynamics (SPH) technique for simulating incompressible fluids such as water. The library created for this thesis can simulate water using SPH at 40 fps with upwards of 100,000 particles on an NVIDIA GTX480 GPU. The fluid system has interactive features such as object collision, and the ability to add and remove particles dynamically. These features, as well as physical properties of the simulation, can be controlled intuitively from the user interface of Blender.
 Date Issued
Show less  Date Issued
 2011
 Identifier
 FSU_migr_etd4931
 Format
 Thesis
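The core SPH computation underlying such a fluid simulator is the per-particle density estimate. The sketch below uses the poly6 smoothing kernel popularized by Müller et al. (2003) for interactive fluids; the kernel choice and the naive O(N²) neighbor loop are illustrative assumptions, not the thesis's GPU implementation:

```python
import math

def sph_density(positions, mass=1.0, h=1.0):
    """Per-particle density: sum the poly6 smoothing kernel
    W(r) = 315/(64*pi*h^9) * (h^2 - r^2)^3 over all neighbors within the
    support radius h. Naive O(N^2) neighbor search for clarity; real-time
    implementations use spatial hashing on the GPU."""
    coef = 315.0 / (64.0 * math.pi * h ** 9)
    out = []
    for (xi, yi, zi) in positions:
        rho = 0.0
        for (xj, yj, zj) in positions:
            r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            if r2 < h * h:  # inside the kernel support
                rho += mass * coef * (h * h - r2) ** 3
        out.append(rho)
    return out
```

Pressure and viscosity forces follow the same pattern with different kernels, which is what makes the method map so naturally onto data-parallel OpenCL kernels.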
 Title
 Artificial Prediction Markets for Classification, Regression and Density Estimation.
 Creator

Lay, Nathan, Barbu, Adrian, Meyer-Baese, Anke, Sinha, Debajyoti, Ye, Ming, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

Prediction markets are forums of trade where contracts on the future outcomes of events are bought and sold. These contracts reward buyers based on correct predictions and thus give incentive to make accurate predictions. Prediction markets have successfully predicted the outcomes of sporting events, elections, scientific hypotheses, foreign affairs, etc., and have repeatedly demonstrated themselves to be more accurate than individual experts or polling [2]. Since prediction markets are aggregation mechanisms, they have garnered interest in the machine learning community. Artificial prediction markets have been successfully used to solve classification problems [34, 33]. This dissertation explores the underlying optimization problem in the classification market, as presented in [34, 33], proves that it is related to maximum log likelihood, relates the classification market to existing machine learning methods, and further extends the idea to regression and density estimation. In addition, the results of empirical experiments are presented on a variety of UCI [25], LIAAD [49] and synthetic data to demonstrate the probability accuracy, the prediction accuracy as compared to Random Forest [9] and Implicit Online Learning [32], and the loss function.
 Date Issued
 2013
 Identifier
 FSU_migr_etd7461
 Format
 Thesis
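A minimal artificial-market aggregation can be sketched as follows. The parimutuel-style budget update shown here (payout proportional to belief-on-outcome divided by price-on-outcome) is one common mechanism chosen for illustration; it is not necessarily the exact betting function of [34, 33]:

```python
def market_price(beliefs, budgets):
    """Aggregate each participant's belief in the positive outcome into a
    single market price: the budget-weighted average belief."""
    total = sum(budgets)
    return sum(b * w for b, w in zip(beliefs, budgets)) / total

def settle(beliefs, budgets, outcome):
    """After the binary outcome (0 or 1) is revealed, scale each budget by
    belief-on-outcome / price-on-outcome. Participants who assigned more
    probability than the market to the realized outcome gain wealth, and
    the total budget is conserved, so weights adapt online."""
    p = market_price(beliefs, budgets)
    price_on = p if outcome == 1 else 1.0 - p
    return [w * ((b if outcome == 1 else 1.0 - b) / price_on)
            for b, w in zip(beliefs, budgets)]
```

Run over a stream of labeled examples, repeated settlement concentrates budget on accurate participants, which is the sense in which the market acts as a trained aggregation mechanism.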
 Title
 Sparse Motion Analysis.
 Creator

Ding, Liangjing, Barbu, Adrian, Meyer-Baese, Anke, Liu, Xiuwen, Slice, Dennis, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

Motion segmentation is an essential preprocessing task in many computer vision problems. In this dissertation, the motion segmentation problem is studied and analyzed. First, we establish a framework for the accurate evaluation of the motion field produced by different algorithms. Based on the framework, we introduce a feature tracking algorithm based on RankBoost which automatically prunes bad trajectories. The algorithm is observed to outperform many feature trackers using different measures. Second, we develop three different motion segmentation algorithms. The first algorithm is based on spectral clustering. The affinity matrix is built from the angular information between different trajectories. We also propose a metric to select the best dimension of the lower-dimensional space onto which the trajectories are projected. The second algorithm is based on learning. Using training examples, it obtains a ranking function to evaluate and compare a number of motion segmentations generated by different algorithms and pick the best one. The third algorithm is based on energy minimization using the Swendsen-Wang cut algorithm and simulated annealing. It has a time complexity of $O(N^2)$, compared to at least $O(N^3)$ for the spectral clustering based algorithms; it can also take generic forms of energy functions. We evaluate all three algorithms as well as several other state-of-the-art methods on a standard benchmark and show competitive performance.
 Date Issued
 2013
 Identifier
 FSU_migr_etd7355
 Format
 Thesis
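The spectral clustering algorithm above starts from an affinity matrix built on angular information between trajectories. One simple realization of such an affinity (the exact functional form and scale parameter used in the dissertation may differ) is:

```python
import math

def angular_affinity(trajectories, alpha=10.0):
    """Pairwise affinity matrix from the angle between trajectories, each
    given as a flat list of displacement components:
    a_ij = exp(-alpha * (1 - cos(theta_ij))), so parallel motions get
    affinity 1 and dissimilar directions decay toward 0."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    n = len(trajectories)
    return [[math.exp(-alpha * (1.0 - cosine(trajectories[i], trajectories[j])))
             for j in range(n)] for i in range(n)]
```

Spectral clustering then embeds the trajectories via the leading eigenvectors of (a normalized form of) this matrix and runs k-means in that lower-dimensional space.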
 Title
 Peridynamic Modeling and Simulation of Polymer-Nanotube Composites.
 Creator

Henke, Steven F., Shanbhag, Sachin, Okoli, Okenwa, Erlebacher, Gordon, Plewa, Tomasz, Oates, William, Department of Scientific Computing, Florida State University
 Abstract/Description

In this document, we develop and demonstrate a framework for simulating the mechanics of polymer materials that are reinforced by carbon nanotubes. Our model utilizes peridynamic theory to describe the mechanical response of the polymer and polymer-nanotube interfaces. We benefit from the continuum formulation used in peridynamics because (1) it allows the polymer material to be coarse-grained to the scale of the reinforcing nanofibers, and (2) failure via nanotube pullout and matrix tearing is possible based on energetic considerations alone (i.e., without special treatment). To reduce the degrees of freedom that must be simulated, the reinforcement effect of the nanotubes is represented by a mesoscale bead-spring model. This approach permits the arbitrary placement of reinforcement "strands" in the problem domain and motivates the need for irregular quadrature point distributions, which have not yet been explored in the peridynamic setting. We address this matter in detail and report on aspects of mesh sensitivity that we uncovered in peridynamic simulations. Using a manufactured solution, we study the effects of quadrature point placement on the accuracy of the solution scheme in one and two dimensions. We demonstrate that square grids and the generator points of a centroidal Voronoi tessellation (CVT) support solutions of similar accuracy, but CVT grids have desirable characteristics that may justify the additional computational cost required for their construction. Impact simulations provide evidence that CVT grids support fracture patterns that resemble those obtained on higher-resolution cubic Cartesian grids with a reduced computational burden. With the efficacy of irregular meshing schemes established, we exercise our model by dynamically stretching a cylindrical specimen composed of the polymer-nanotube composite. We vary the number of reinforcements, the alignment of the filler, and the properties of the polymer-nanotube interface. Our results suggest that enhanced reinforcement requires an interfacial stiffness that exceeds that of the neat polymer. We confirm that the reinforcement is most effective when a nanofiber is aligned with the applied deformation, least effective when a nanofiber is aligned transverse to the applied deformation, and achieves intermediate values for other orientations. Sample configurations containing two fibers are also investigated.
 Date Issued
Show less  Date Issued
 2013
 Identifier
 FSU_migr_etd8566
 Format
 Thesis
 Title
 The Integration of Artificial Neural Networks and Geometric Morphometrics to Classify Teeth from Carcharhinus Sp.
 Creator

Soda, K. James, Slice, Dennis E., Meyer-Baese, Anke, Department of Scientific Computing, Florida State University
 Abstract/Description

The advent of geometric morphometrics and the revitalization of artificial neural networks have created powerful new tools to classify morphological structures to groups. Although these two approaches have already been combined, there has been less attention on how such combinations perform relative to more traditional methods. Here we use geometric morphometric data and neural networks to identify from which species upper-jaw teeth from carcharhiniform sharks in the genus Carcharhinus originated, and these results are compared to more traditional classification methods. In addition to the methodological applications of this comparison, an ability to identify shark teeth would facilitate the incorporation of shark teeth's vast fossil record into evolutionary studies. Using geometric morphometric data originating from Naylor and Marcus (1994), we built two types of neural networks, multilayer perceptrons and radial basis function neural networks, to classify teeth from C. acronotus, C. leucas, C. limbatus, and C. plumbeus, as well as classifying the teeth using linear discriminant analysis. All classification schemes were trained using the right upper-jaw teeth of 15 individuals. Between these three methods, the multilayer perceptron performed the best, followed by linear discriminant analysis, and then the radial basis function neural network. All three classification systems appear to be more accurate than previous efforts to classify Carcharhinus teeth using linear distances between landmarks and linear discriminant analysis. In all three classification systems, misclassified teeth tended to originate either near the symphysis or near the jaw angle, though an additional peak occurred between these two structures. To assess whether smaller training sets would lead to comparable accuracies, we used a multilayer perceptron to classify teeth from the same species but now based on a training set of right upper-jaw teeth from only five individuals. Although not as accurate as the network based on 15 individuals, the network performed favorably. As a final test, we built a multilayer perceptron to classify teeth from C. altimus, C. obscurus, and C. plumbeus, which have more similar upper-jaw teeth than the original four species, based on training sets of five individuals. Again, the classification system performed better than a system that combines linear measurements and discriminant function analysis. Given the high accuracies for all three systems, it appears that the use of geometric morphometric data has a great impact on the accuracy of the classification system, whereas the exact method of classification tends to make less of a difference. These results may be applicable to other systems and other morphological structures.
 Date Issued
 2013
 Identifier
 FSU_migr_etd8640
 Format
 Thesis
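Geometric morphometric data of this kind come from Procrustes superimposition of landmark configurations, which removes location, scale, and orientation before any classifier sees the coordinates. A minimal 2-D version using the complex-number formulation (illustrative only, not the study's code):

```python
import math

def procrustes_align(ref, shape):
    """Superimpose a 2-D landmark configuration onto a reference: remove
    translation (center on the centroid), size (scale to unit centroid
    size), and rotation (least-squares optimal angle). Landmarks are
    treated as complex numbers x + iy, for which the optimal rotation is
    the phase of sum(conj(a_i) * b_i)."""
    def normalize(pts):
        zs = [complex(x, y) for x, y in pts]
        c = sum(zs) / len(zs)                  # centroid
        zs = [z - c for z in zs]               # remove translation
        size = math.sqrt(sum(abs(z) ** 2 for z in zs))
        return [z / size for z in zs]          # remove scale
    a = normalize(ref)
    b = normalize(shape)
    s = sum(ai.conjugate() * bi for ai, bi in zip(a, b))
    rot = s / abs(s)                           # unit complex rotation
    return [((bi / rot).real, (bi / rot).imag) for bi in b]
```

The aligned coordinates (or partial-warp scores derived from them) then serve as inputs to the perceptron, radial basis function network, or discriminant analysis.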
 Title
 Multi-GPU Solutions of Geophysical PDEs with Radial Basis Function-Generated Finite Differences.
 Creator

Bollig, Evan F., Erlebacher, Gordon, Sussman, Mark, Flyer, Natasha, Slice, Dennis, Ye, Ming, Peterson, Janet, Department of Scientific Computing, Florida State University
 Abstract/Description

Many numerical methods based on Radial Basis Functions (RBFs) are gaining popularity in the geosciences due to their competitive accuracy, functionality on unstructured meshes, and natural extension into higher dimensions. One method in particular, Radial Basis Function-generated Finite Differences (RBF-FD), is drawing attention due to its comparatively low computational complexity versus other RBF methods, high-order accuracy (6th to 10th order is common), and parallel nature. Similar to classical Finite Differences (FD), RBF-FD computes weighted differences of stencil node values to approximate derivatives at stencil centers. The method differs from classical FD in that the test functions used to calculate the differentiation weights are n-dimensional RBFs rather than one-dimensional polynomials. This allows for generalization to n-dimensional space on completely scattered node layouts. Although RBF-FD was first proposed nearly a decade ago, it is only now gaining a critical mass to compete against well known competitors in modeling like FD, Finite Volume and Finite Element. To truly contend, RBF-FD must transition from single-threaded MATLAB environments to large-scale parallel architectures. Many HPC systems around the world have made the transition to Graphics Processing Unit (GPU) accelerators as a solution for added parallelism and higher throughput. Some systems offer significantly more GPUs than CPUs. As the problem size, N, grows larger, it behooves us to work on parallel architectures, be it CPUs or GPUs. In addition to demonstrating the ability to scale to hundreds or thousands of compute nodes, this work introduces parallelization strategies that span RBF-FD across multi-GPU clusters. The stability and accuracy of the parallel implementation is verified through the explicit solution of two PDEs. Additionally, a parallel implementation for implicit solutions is introduced as part of continued research efforts. This work establishes RBF-FD as a contender in the arena of distributed HPC numerical methods.
 Date Issued
Show less  Date Issued
 2013
 Identifier
 FSU_migr_etd8531
 Format
 Thesis
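The RBF-FD weight computation described above reduces to one small dense linear solve per stencil. Below is a 1-D sketch for a second-derivative stencil with Gaussian RBFs; the shape parameter and the omission of polynomial augmentation are simplifying assumptions for illustration (the thesis works on scattered n-dimensional geophysical node sets):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbffd_weights_d2(center, stencil, eps=1.0):
    """RBF-FD weights w_j such that sum_j w_j f(x_j) approximates
    f''(center), using Gaussian RBFs phi(r) = exp(-(eps*r)^2) as test
    functions: solve A w = b with A_ij = phi(|x_i - x_j|) and
    b_i = d^2/dx^2 phi(|x - x_i|) evaluated at x = center."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    def d2phi(d):  # second derivative of phi(x - x_i) at offset d = x - x_i
        return math.exp(-(eps * d) ** 2) * (4 * eps ** 4 * d ** 2 - 2 * eps ** 2)
    A = [[phi(xi - xj) for xj in stencil] for xi in stencil]
    b = [d2phi(center - xi) for xi in stencil]
    return solve(A, b)
```

By construction the weights differentiate each stencil RBF exactly; applied to smooth field data they give the high-order local derivative approximations that RBF-FD distributes across GPUs, one independent stencil solve per node.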
 Title
 Objective Front Detection from Ocean Color Data.
 Creator

Crock, Nathan, Erlebacher, Gordon, Chassignet, Eric, Ye, Ming, Meyer-Baese, Anke, Department of Scientific Computing, Florida State University
 Abstract/Description

We outline a new approach to objectively locate and define mesoscale oceanic features from satellite-derived ocean color data. Modern edge detection algorithms are robust and accurate for most applications; oceanic satellite observations, however, introduce challenges that foil many differentiation-based algorithms. The clouds, discontinuities, noise, and low variability of pertinent data prove confounding. In this work the input data is first quantized using a centroidal Voronoi tessellation (CVT), removing noise and revealing the low-variability fronts of interest. Clouds are then removed by assuming the values of their surrounding neighbors, and the perimeters of these resulting cloudless regions localize the fronts to a small set. We then use the gradient of the quantized data as a compass to walk around the front and periodically select points to be knots for a Hermite spline. These Hermite splines yield an analytic representation of the fronts and provide practitioners with a convenient tool to calibrate their models.
 Date Issued
 2013
 Identifier
 FSU_migr_etd8544
 Format
 Thesis
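The CVT quantization step has a simple scalar analogue via Lloyd iteration: alternately assign each data value to its nearest generator and move each generator to the mean of its cluster. This 1-D sketch is illustrative only; the paper applies the same idea to 2-D ocean color imagery:

```python
def cvt_quantize(values, k, iters=100):
    """1-D Lloyd iteration (k >= 2): converges to a centroidal Voronoi
    tessellation of the data, i.e. each generator sits at the centroid of
    its own Voronoi cell. Quantizing the data to these k generator values
    suppresses noise while preserving the dominant levels."""
    vs = sorted(values)
    # spread the initial generators across the sorted data
    gens = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vs:
            j = min(range(k), key=lambda g: abs(v - gens[g]))
            clusters[j].append(v)
        # move each generator to its cluster centroid (keep it if empty)
        gens = [sum(c) / len(c) if c else gens[j]
                for j, c in enumerate(clusters)]
    return gens
```

After quantization, boundaries between cells of different generator values are exactly the low-variability fronts that defeat gradient-based edge detectors on the raw data.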
 Title
 Reduced Order Modeling Using the Wavelet-Galerkin Approximation of Differential Equations.
 Creator

Witman, David, Peterson, Janet, Gunzburger, Max, Ye, Ming, Department of Scientific Computing, Florida State University
 Abstract/Description

Over the past few decades an increased interest in reduced order modeling approaches has led to its application in areas such as real-time simulations and parameter studies, among many others. In the context of this work, reduced order modeling seeks to solve differential equations using substantially fewer degrees of freedom compared to a standard approach like the finite element method. The finite element method is a Galerkin method which typically uses piecewise polynomial functions to approximate the solution of a differential equation. Wavelet functions have recently become a relevant topic in the area of computational science due to their attractive properties, including differentiability and multiresolution. This research seeks to combine a wavelet-Galerkin method with a reduced order approach to approximate the solution to a differential equation with a given set of parameters. This work will focus on showing that using a reduced order approach in a wavelet-Galerkin setting is a viable option in determining a reduced order solution to a differential equation.
 Date Issued
 2013
 Identifier
 FSU_migr_etd8663
 Format
 Thesis
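The multiresolution property mentioned in this abstract is easiest to see with the Haar basis, the simplest wavelet family (the thesis uses smoother, differentiable wavelets for the Galerkin discretization; Haar is shown here only to illustrate the average/detail decomposition):

```python
def haar_decompose(signal):
    """Full Haar wavelet transform of a power-of-two-length signal:
    repeatedly replace the current coarse level with pairwise averages
    (approximation coefficients) followed by pairwise half-differences
    (detail coefficients). Smooth regions yield near-zero details, which
    is what makes wavelet bases effective for model reduction."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2.0 for i in range(half)]
        det = [(out[2 * i] - out[2 * i + 1]) / 2.0 for i in range(half)]
        out[:n] = avg + det
        n = half
    return out
```

Truncating the small detail coefficients gives a reduced representation of the solution, the same compression idea the wavelet-Galerkin ROM exploits at the level of the trial space.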
 Title
 Toward Connecting Core-Collapse Supernova Theory with Observations.
 Creator

Handy, Timothy A., Plewa, Tomasz, Sussman, Mark, Meyer-Baese, Anke, Erlebacher, Gordon, Navon, Ionel M., Department of Scientific Computing, Florida State University
 Abstract/Description

We study the evolution of the collapsing core of a 15 solar mass blue supergiant supernova progenitor from the moment shortly after core bounce until 1.5 seconds later. We present a sample of two and threedimensional hydrodynamic models parameterized to match the explosion energetics of supernova SN 1987A. We focus on the characteristics of the flow inside the gain region and the interplay between hydrodynamics, selfgravity, and neutrino heating, taking into account uncertainty in the...
Show moreWe study the evolution of the collapsing core of a 15 solar mass blue supergiant supernova progenitor from the moment shortly after core bounce until 1.5 seconds later. We present a sample of two and threedimensional hydrodynamic models parameterized to match the explosion energetics of supernova SN 1987A. We focus on the characteristics of the flow inside the gain region and the interplay between hydrodynamics, selfgravity, and neutrino heating, taking into account uncertainty in the nuclear equation of state. We characterize the evolution and structure of the flow behind the shock in terms the accretion flow dynamics, shock perturbations, energy transport and neutrino heating effects, and convective and turbulent motions. We also analyze information provided by particle tracers embedded in the flow. Our models are computed with a highresolution finite volume shock capturing hydrodynamic code. The code includes source terms due to neutrinomatter interactions from a lightbulb neutrino scheme that is used to prescribe the luminosities and energies of the neutrinos emerging from the core of the protoneutron star. The protoneutron star is excised from the computational domain, and its contraction is modeled by a timedependent inner boundary condition. We find the spatial dimensionality of the models to be an important contributing factor in the explosion process. Compared to twodimensional simulations, our threedimensional models require lower neutrino luminosities to produce equally energetic explosions. We estimate that the convective engine in our models is $4$% more efficient in three dimensions than in two dimensions. We propose that this is due to the difference of morphology of convection between two and threedimensional models. 
Specifically, the greater efficiency of the convective engine found in three-dimensional simulations might be due to the larger surface-to-volume ratio of convective plumes, which aids in distributing energy deposited by neutrinos. We do not find evidence of the standing accretion shock instability in our models. Instead we identify a relatively long phase of quasi-steady convection below the shock, driven by neutrino heating. During this phase, the analysis of the energy transport in the post-shock region reveals characteristics closely resembling those of penetrative convection. We find that the flow structure grows from small scales and organizes into large convective plumes on the scale of the gain region. We use tracer particles to study the flow properties, and find substantial differences between two-dimensional and three-dimensional models in the residency times of fluid elements in the gain region. These differences appear to originate at the base of the gain region and are due to differences in the structure of convection. We also identify differences in the evolution of the energy of the fluid elements, how they are heated by neutrinos, and how they become gravitationally unbound. In particular, at the time when the explosion commences, we find that the unbound material has relatively long residency times in two-dimensional models, while in three dimensions a significant fraction of the explosion energy is carried by particles with relatively short residency times. We conduct a series of numerical experiments in which we methodically decrease the angular resolution in our three-dimensional models. We observe that the explosion energy decreases dramatically once the resolution is inadequate to capture the morphology of convection on large scales. Thus, we demonstrate that it is possible to connect successful, energetic three-dimensional models with unsuccessful three-dimensional models simply by decreasing the numerical resolution, and thus the amount of resolved physics. 
This example shows that the role of dimensionality is secondary to correctly accounting for the basic physics of the explosion. The relatively low spatial resolution of current three-dimensional models allows for only rudimentary insights into the role of turbulence in driving the explosion. However, and contrary to some recent reports, we do not find evidence for turbulence being a key factor in reviving the stalled supernova shock.
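The tracer-particle analysis mentioned above hinges on residency times in the gain region. A minimal sketch of how such a quantity could be computed from one recorded trajectory, assuming fixed gain-region boundaries for simplicity (`r_gain` and `r_shock` are hypothetical parameters; in the simulations both boundaries move):

```python
import numpy as np

def residency_time(times, radii, r_gain, r_shock):
    """Total time a tracer particle spends inside the gain region,
    taken here as the fixed shell r_gain < r < r_shock (a simplification;
    the actual analysis tracks moving boundaries).
    Each trajectory segment is attributed by its midpoint radius."""
    times = np.asarray(times, dtype=float)
    radii = np.asarray(radii, dtype=float)
    dt = np.diff(times)                   # duration of each segment
    mid = 0.5 * (radii[:-1] + radii[1:])  # midpoint radius of each segment
    inside = (mid > r_gain) & (mid < r_shock)
    return float(dt[inside].sum())
```

Summing such residency times over all tracers, split by whether a particle ends up gravitationally unbound, would reproduce the kind of two- versus three-dimensional comparison described in the abstract.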
 Date Issued
 2014
 Identifier
 FSU_migr_etd8798
 Format
 Thesis
 Title
 Binary White Dwarf Mergers: Weak Evidence for Prompt Detonations in High-Resolution Adaptive Mesh Simulations.
 Creator

Fenn, Daniel, Plewa, Tomasz, Sussman, Mark, Erlebacher, Gordon, Department of Scientific Computing, Florida State University
 Abstract/Description

The origins of thermonuclear supernovae remain poorly understood, a troubling fact given their importance in astrophysics and cosmology. A leading theory posits that these events arise from the merger of white dwarfs in a close binary system. In this study we examine the possibility of prompt ignition, in which a runaway fusion reaction is initiated in the early stages of the merger. We present a set of three-dimensional white dwarf merger simulations performed with the help of a high-resolution adaptive mesh refinement hydrocode. We consider three binary systems of different mass ratios composed of carbon/oxygen white dwarfs with total mass exceeding the Chandrasekhar mass. We additionally explore the effects of mesh resolution on important simulation parameters. We find that two distinct behaviors emerge depending on the progenitor mass ratio. For systems of components with differing masses, a boundary layer forms around the accretor. For systems of nearly equal mass, the merger product displays deep entrainment of each star into the other. We closely monitor thermonuclear burning that begins when sufficiently dense material is shocked during early stages of the merger process. Analysis of ignition times leads us to conclude that for binary systems with components of unequal mass whose combined mass is close to the Chandrasekhar limit, there is a negligible chance of prompt ignition. Simulations of similar systems with a combined mass of 2 solar masses suggest that prompt ignition may be possible, but this requires further study at higher resolution. The system with components of nearly equal mass does not seem likely to undergo prompt ignition, and higher resolution simulations are unlikely to change this conclusion. We additionally find that white dwarf merger simulations require high resolution. 
Insufficient resolution can qualitatively change simulation outcomes, either by smoothing important fluctuations in density and temperature, or by altering the dynamics of the system such that additional physical processes, such as gravity, are incorrectly represented.
 Date Issued
 2014
 Identifier
 FSU_migr_etd8779
 Format
 Thesis
 Title
 Bayesian Neural Networks in Data-Intensive High Energy Physics Applications.
 Creator

Perry, Michelle, MeyerBaese, Anke, Prosper, Harrison, Piekarewicz, Jorge, Shanbhag, Sachin, Beerli, Peter, Department of Scientific Computing, Florida State University
 Abstract/Description

This dissertation studies a graphics processing unit (GPU) construction of Bayesian neural networks (BNNs) using large training data sets. The goal is to create a program for mapping phenomenological Minimal Supersymmetric Standard Model (pMSSM) parameters to their predictions. This would allow for a more robust method of studying the Minimal Supersymmetric Standard Model, which is of much interest at the Large Hadron Collider (LHC) at CERN. A systematic study of the speedup achieved in the GPU application compared to a central processing unit (CPU) implementation is presented.
 Date Issued
 2014
 Identifier
 FSU_migr_etd8867
 Format
 Thesis
 Title
 Improving Inference in Population Genetics Using Statistics.
 Creator

Palczewski, Michal, Beerli, Peter, Srivastava, Anuj, Erlebacher, Gordon, Lemmon, Alan, Slice, Dennis, Department of Scientific Computing, Florida State University
 Abstract/Description

My studies at Florida State University focused on using computers and statistics to solve problems in population genetics. I have created models and algorithms that have the potential to improve the statistical analysis of population genetics. Population genetic data is often noisy and thus requires the use of statistics in order to draw meaning from the data. This dissertation consists of three main projects. The first project involves parallel evaluation and model inference on multilocus data sets. Bayes factors are used for model selection. We used thermodynamic integration to calculate these Bayes factors. To take advantage of parallel processing and parallelize the calculation across a high-performance computer cluster, I developed a new method to split the Bayes factor calculation into independent units and combine them later. The next project, the Transition Probability Structured Coalescence (TSPC), involved the creation of a continuous approximation to the discrete migration process used in the structured coalescent that is commonly used to infer migration rates in biological populations. Previous methods required the simulation of these migration events, but there is little power to estimate the time and occurrence of these events. In my method, they are replaced with a one-dimensional numerical integration. The third project involved the development of a model for the inference of the time of speciation. Previous models used a set time to delineate a speciation event, treating speciation as a point process. Instead, this point process is replaced with a parameterized speciation model in which each lineage speciates according to a parameterized distribution. This is effectively a broader model that allows both very quick and very slow speciation. It also includes the previous model as a limiting case. 
These three projects, although rather independent of each other, improve the inference of population genetic models and thus allow better analyses of genetic data in fields such as phylogeography, conservation, and epidemiology.
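The splitting of the Bayes factor calculation into independent units can be sketched briefly: each inverse temperature is sampled separately (for example on its own cluster node), and only the per-temperature averages of the log-likelihood are combined by quadrature. A minimal illustration of the combining step, not the thesis code:

```python
import numpy as np

def log_marginal_via_ti(log_like_draws, betas):
    """Thermodynamic integration: ln p(D) is the integral over beta in [0, 1]
    of E_beta[ln L], where E_beta averages log-likelihoods sampled from the
    power posterior at inverse temperature beta. Each temperature is an
    independent unit; this function only combines the per-temperature means."""
    means = np.array([np.mean(draws) for draws in log_like_draws])
    # trapezoid rule over the beta grid
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(betas)))
```

A Bayes factor between two models is then the difference of two such log marginal likelihoods, so each model's temperature ladder can be evaluated entirely in parallel.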
 Date Issued
 2013
 Identifier
 FSU_migr_etd7540
 Format
 Thesis
 Title
 Irradiation-Induced Composition Patterns and Segregation in Binary Solid Solutions.
 Creator

Dubey, Santosh, Azab, Anter El, Rikvold, Per Arne, Shanbhag, Sachin, Erlebacher, Gordon, Plewa, Tomasz, Department of Scientific Computing, Florida State University
 Abstract/Description

A theoretical-computational model is developed to study irradiation-induced composition patterns and segregation in binary solid solutions, motivated by the fact that such composition changes alter a wide range of metallurgical properties of structural alloys used in the nuclear industry. For a binary alloy system, the model is based on a coupled, nonlinear set of reaction-diffusion equations for six defect and atomic species, which include vacancies, three interstitial dumbbell configurations, and the two alloy elements. Two sets of boundary conditions have been considered: periodic boundary conditions, which are used to investigate composition patterning in bulk alloys under irradiation, and reaction boundary conditions to study radiation-induced segregation at surfaces. Reactions are considered to be either between defects, which is called recombination, or between defects and alloying elements, which results in a change of the interstitial dumbbell type. Long-range diffusion of all the species is considered to happen by vacancy and interstitialcy mechanisms. As such, diffusion of the alloy elements is coupled to the diffusion of vacancies and interstitials. Defect generation is considered to be associated with collision cascade events that occur randomly in space and time. Each event brings about a change in the local concentration of all the species over the mesoscale material volume affected by the cascade. The stiffly-stable Gear method has been implemented to solve the reaction-diffusion model numerically. Gear's method is a higher-order implicit linear multistep method, implemented in predictor-corrector fashion. The resulting model has been tested with a miscible Cu-Au solid solution. For this alloy, and in the absence of boundaries, steady-state composition patterns on the scale of several nanometers have been observed. 
Fourier space properties of these patterns have been found to depend on irradiation-specific control parameters, temperature, and the initial state of the alloy. Linear stability analysis of the set of reaction-diffusion equations confirms the findings of the numerical simulations. In the presence of boundaries, radiation-induced segregation of alloying species has been observed in the boundary layer: enrichment of the faster diffusing species and depletion of the slower diffusing species. Radiation-induced segregation has also been found to depend upon irradiation-specific control parameters and temperature. The results show that the degree of segregation is spatially nonuniform and hence should be studied in higher dimensions. Proper formulation of the boundary conditions showed that segregation of the alloy elements to the boundary is coupled to the boundary motion. In both the patterning and segregation investigations, the irradiated sample has been found to recover its uniform state with time when irradiation is turned off. The inference drawn from this observation is that in miscible solid solutions irradiation-induced composition patterning and radiation-induced segregation are not realizable in the absence of irradiation.
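Gear's stiffly-stable family can be illustrated with its two-step member, BDF2, applied to a linear stiff system, where the implicit corrector reduces to a single linear solve per step. This is a generic sketch of the method class, not the thesis solver (which applies a higher-order predictor-corrector variant to the full nonlinear reaction-diffusion system):

```python
import numpy as np

def bdf2_linear(A, y0, dt, nsteps):
    """Gear's stiffly-stable BDF2 method for the linear stiff system y' = A y:
        y_{n+1} = (4/3) y_n - (1/3) y_{n-1} + (2/3) dt A y_{n+1},
    i.e. solve (I - (2/3) dt A) y_{n+1} = (4/3) y_n - (1/3) y_{n-1}."""
    y_prev = np.asarray(y0, dtype=float)
    I = np.eye(len(y_prev))
    # bootstrap the two-step method with one backward-Euler step
    y_cur = np.linalg.solve(I - dt * A, y_prev)
    M = I - (2.0 / 3.0) * dt * A
    for _ in range(nsteps - 1):
        rhs = (4.0 / 3.0) * y_cur - (1.0 / 3.0) * y_prev
        y_prev, y_cur = y_cur, np.linalg.solve(M, rhs)
    return y_cur
```

Because BDF2 is A-stable for decaying modes, the step size can be chosen by accuracy on the slow chemistry rather than by the stiff recombination rates.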
 Date Issued
 2012
 Identifier
 FSU_migr_etd5601
 Format
 Thesis
 Title
 Parametric Uncertainty Analysis of Uranium Transport Surface Complexation Models.
 Creator

Miller, Geoffery L., Ye, Ming, Van Engelen, Robert, Plewa, Tomasz, Department of Scientific Computing, Florida State University
 Abstract/Description

Parametric uncertainty analysis of surface complexation modeling (SCM) has been studied using linear and nonlinear analysis. A computational SCM model was developed by Kohler et al. (1996) to simulate the breakthrough of uranium(VI) in a column of quartz. Calibration of the parameters that describe the reactions involved in the reactive-transport simulation has been found to fit experimental data well. Further uncertainty analysis has been conducted to determine the predictive capability of these models. It was concluded that nonlinear analysis results in more accurate prediction interval coverage than linear analysis. An assumption made by both linear and nonlinear analysis is that the parameters follow a normal distribution. In a preliminary study, when using Monte Carlo sampling with a uniform distribution over a known feasible parameter range, the model exhibits no predictive capability. Due to high parameter sensitivity, few realizations reproduce the known data accurately. This results in high confidence in the calibrated parameters, but poor understanding of the parametric distributions. This study first calibrates these parameters using a global optimization technique, the multi-start quasi-Newton BFGS method. Second, a Morris method (MOAT) analysis is used to screen parametric sensitivity. It is seen from MOAT that all parameters exhibit nonlinear effects on the simulation. To approximate the simulated behavior of SCM parameters without assuming a normal distribution, this study employs a covariance-adaptive Markov chain Monte Carlo (MCMC) algorithm. It is seen from posterior distributions generated from accepted parameter sets that the parameters do not necessarily follow a normal distribution. Likelihood surfaces confirm the calibration of the models, but show that responses to parameters are complex. This complex surface is due to a nonlinear model and high correlations between parameters. 
The posterior parameter distributions are then used to find prediction intervals for an experiment not used to calibrate the model. The predictive capability of adaptive MCMC is found to be better than that of linear and nonlinear analysis, showing a better understanding of parametric uncertainty than the previous study.
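The covariance-adaptive MCMC idea (in the style of Haario et al.'s adaptive Metropolis) can be sketched briefly: the Gaussian proposal covariance is periodically rescaled from the running sample covariance, so the chain tunes itself to correlated, non-normal posteriors. A generic illustration, not the algorithm used in the study:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500, seed=0):
    """Covariance-adaptive Metropolis sampler: after a short warm-up,
    the proposal covariance is refreshed from the chain's own history
    (plus a small jitter to keep it nonsingular)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d, lp = len(x), log_post(x0)
    sd = 2.38 ** 2 / d          # standard adaptive-Metropolis scaling
    cov = np.eye(d)
    samples = []
    for i in range(n_iter):
        if i > adapt_start and i % 100 == 0:
            cov = np.cov(np.array(samples).T) + 1e-6 * np.eye(d)
        prop = rng.multivariate_normal(x, sd * cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```

Prediction intervals like those in the abstract then come from pushing the retained posterior samples through the forward model and taking quantiles of the simulated outputs.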
 Date Issued
 2011
 Identifier
 FSU_migr_etd2410
 Format
 Thesis
 Title
 Generalized Procrustes Surface Analysis: A Landmark-Free Approach to Superimposition and Shape Analysis.
 Creator

Pomidor, Benjamin, Slice, Dennis, Beerli, Peter, Shanbhag, Sachin, Department of Scientific Computing, Florida State University
 Abstract/Description

The tools and techniques used in shape analysis have constantly evolved, but their objective remains fixed: to quantify the differences in shape between two objects in a consistent and meaningful manner. The hand measurements of calipers and protractors of the past have yielded to laser scanners and landmark-placement software, but the process still involves transforming an object's physical shape into a concise set of numerical data that can be readily analyzed by mathematical means [Rohlf 1993]. In this paper, we present a new method to perform this transformation by taking full advantage of today's high-power computers and high-resolution scanning technology. This method uses surface scans to calculate a shape-difference metric and perform superimposition rather than relying on carefully (and tediously) placed manual landmarks. This is accomplished by building upon and extending the Iterative Closest Point algorithm. We also examine some new ways this data may be used; we can, for example, calculate an averaged surface directly and visualize pointwise shape information over this surface. Finally, we demonstrate the use of this method on a set of primate skulls and compare the results of the new methodology with traditional geometric morphometric analysis.
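The Iterative Closest Point machinery referenced above rests on two ingredients that are easy to sketch: nearest-point matching and a rigid Procrustes (Kabsch) superimposition of the matched sets. A minimal NumPy illustration with brute-force matching, not the thesis implementation (function names are ours; production code would use a spatial index and iterate to convergence):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing ||R p_i + t - q_i||
    for already-corresponded point sets (rows are points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force a proper rotation
    D = np.diag([1.0] * (P.shape[1] - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp_step(src, dst):
    """One Iterative Closest Point step: match each source point to its
    nearest destination point, then superimpose with a rigid Kabsch fit."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    R, t = kabsch(src, matched)
    return src @ R.T + t
```

Iterating this step until the matched distances stop decreasing yields both the superimposition and a natural shape-difference metric (the residual distance between the aligned surfaces).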
 Date Issued
 2013
 Identifier
 FSU_migr_etd8714
 Format
 Thesis
 Title
 The Solution of a Burgers' Equation Inverse Problem with Reduced-Order Modeling Proper Orthogonal Decomposition.
 Creator

Steward, Jeff, Navon, Ionel M., Gunzburger, Max, Erlebacher, Gordon, Department of Scientific Computing, Florida State University
 Abstract/Description

This thesis presents and evaluates methods for solving the 1D viscous Burgers' partial differential equation with finite difference, finite element, and proper orthogonal decomposition (POD) methods in the context of an optimal control inverse problem. Based on downstream observations, the initial conditions that optimize a lack-of-fit cost functional are reconstructed for a variety of different Reynolds numbers. For moderate Reynolds numbers, our POD method proves to be not only fast and accurate, but also to have a regularizing effect on the inverse problem.
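The POD ingredient can be sketched compactly: the reduced basis consists of the leading left singular vectors of a snapshot matrix, and the optimization is then carried out in the span of that basis (which is the source of the regularizing effect, since unresolved high-frequency modes are simply absent). A minimal illustration under the snapshot-matrix convention of columns being solution states, not the thesis code:

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD of a snapshot matrix (columns are solution states at different
    times): the leading r left singular vectors form the energy-optimal
    rank-r basis. Returns the basis and the captured energy fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()
    return U[:, :r], energy

def pod_project(snapshots, r, u):
    """Project a full-order state u onto the r-dimensional POD subspace
    and lift it back (the reduced model evolves the coefficients Phi.T @ u)."""
    Phi, _ = pod_basis(snapshots, r)
    return Phi @ (Phi.T @ u)
```

In the inverse problem, the control (the initial condition) is parameterized by its r POD coefficients, so the adjoint-based gradient is r-dimensional instead of grid-sized.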
 Date Issued
 2009
 Identifier
 FSU_migr_etd0393
 Format
 Thesis
 Title
 Characterization of Metallocene-Catalyzed Polyethylenes from Rheological Measurements Using a Bayesian Formulation.
 Creator

Takeh, Arsia, Shanbhag, Sachin, ElAzab, Anter, Beerli, Peter, Department of Scientific Computing, Florida State University
 Abstract/Description

Long-chain branching strongly affects the rheological properties of polyethylenes. Branching structure (the density of branch points, branch length, and the locations of the branches) is complicated; therefore, without controlled branching structure it is almost impossible to study the effect of long-chain branching on rheological properties. Single-site catalysts now make it possible to prepare samples in which the molecular weight distribution is relatively narrow and quite reproducible. In addition, a particular type of single-site catalyst, the constrained geometry catalyst, makes it possible to introduce low and well-controlled levels of long-chain branching while keeping the molecular weight distribution narrow. Linear viscoelastic (LVE) properties contain a rich amount of data regarding the molecular structure of polymers. A computational algorithm that seeks to invert the linear viscoelastic spectrum of single-site metallocene-catalyzed polyethylenes is presented in this work. The algorithm uses a general linear rheological model of branched polymers as its underlying engine, and is based on a Bayesian formulation that transforms the inverse problem into a sampling problem. Given experimental rheological data on unknown single-site metallocene-catalyzed polyethylenes, it is able to quantitatively describe the range of values of the weight-averaged molecular weight, MW, and the average branching density, bm, consistent with the data. The algorithm uses a Markov chain Monte Carlo method to simulate the sampling problem. If and when information about the molecular weight is available through supplementary experiments, such as chromatography or light scattering, it can easily be incorporated into the algorithm, as demonstrated.
 Date Issued
 2011
 Identifier
 FSU_migr_etd1729
 Format
 Thesis
 Title
 Edge-Weighted Centroidal Voronoi Tessellation Based Algorithms for Image Segmentation.
 Creator

Wang, Jie, Wang, Xiaoqiang, Wang, Xiaoming, Gunzburger, Max, Peterson, Janet, ElAzab, Anter, Department of Scientific Computing, Florida State University
 Abstract/Description

Centroidal Voronoi tessellations (CVTs) are special Voronoi tessellations whose generators are also the centers of mass (centroids) of the Voronoi regions with respect to a given density function. CVT-based algorithms have proved very useful in the context of image processing. However, when dealing with image segmentation problems, classic CVT algorithms are sensitive to noise. In order to overcome this limitation, we develop an edge-weighted centroidal Voronoi tessellation (EWCVT) model by introducing a new energy term related to the boundary length, which is called "edge energy". The incorporation of the edge energy is equivalent to adding a certain form of compactness constraint in the physical space. With this compactness constraint, we can effectively control the smoothness of the clusters' boundaries. We provide numerical examples to demonstrate the effectiveness, efficiency, flexibility, and robustness of EWCVT. Because of its simplicity and flexibility, we can easily embed other mechanisms within EWCVT to tackle more sophisticated problems. Two models based on EWCVT are developed and discussed. The first one is the "local variation and edge-weighted centroidal Voronoi tessellation" (LVEWCVT) model, obtained by encoding the information of local variation of colors. For classic CVTs or their generalizations (like EWCVT), pixels inside a cluster share the same centroid. Therefore the set of centroids can be viewed as a piecewise constant function over the computational domain, and the resulting segments have to be roughly homogeneous with respect to their corresponding centroids. Inspired by this observation, we propose to calculate the centroids for each pixel separately and locally. This scheme greatly improves the algorithm's tolerance of within-cluster feature variations. By extensive numerical examples and quantitative evaluations, we demonstrate the excellent performance of the LVEWCVT method compared with several state-of-the-art algorithms. 
The LVEWCVT model is especially suitable for detection of inhomogeneous targets with distinct color distributions and textures. Based on EWCVT, we build another model for "superpixels", which is in fact a "regularization" of highly inhomogeneous images. We call our superpixel algorithm "VCells", an abbreviation of "Voronoi cells". For a wide range of images, VCells is capable of generating roughly uniform subregions while nicely preserving local image boundaries. The under-segmentation error is effectively limited in a controllable manner. Moreover, VCells is very efficient. The computational cost is roughly linear in image size with a small constant coefficient. For megapixel-sized images, VCells is able to generate very dense superpixels in a matter of seconds. We demonstrate that VCells outperforms several state-of-the-art algorithms through extensive qualitative and quantitative results on a wide range of complex images. Another important contribution of this work is the "Detecting-Segment-Breaking" (DSB) algorithm, which can be used to guarantee the spatial connectedness of segments generated by CVT-based algorithms. Since the metric is usually defined on the color space, the segments produced by CVT-based algorithms are not necessarily spatially connected. For some applications, this feature is useful and conceptually meaningful, e.g., when the foreground objects are not spatially connected. But for some other applications, like the superpixel problem, this "good" feature becomes unacceptable. By simple "extracting connected components" and "relabeling" schemes, DSB successfully overcomes this difficulty. Moreover, the computational cost of DSB is roughly linear in image size with a small constant coefficient. From the theoretical perspective, the innovative idea of EWCVT greatly enriches the methodology of CVTs. (The idea of EWCVT has already been used for variational curve smoothing and reconstruction problems.) 
For applications, this work shows the great power of EWCVT for problems related to image segmentation.
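In color space, the classic CVT segmentation that EWCVT builds on reduces to Lloyd's iteration (k-means): assign each pixel to its nearest generator, then move each generator to the centroid of its cluster. A sketch of that baseline only, with the edge-energy term deliberately omitted (adding it would penalize boundary length in the assignment step):

```python
import numpy as np

def cvt_segment(pixels, k, n_iter=50, seed=0):
    """Classic CVT in feature (color) space via Lloyd iterations.
    pixels: (n, c) array of per-pixel feature vectors.
    Returns cluster labels and the converged generators (centroids)."""
    rng = np.random.default_rng(seed)
    gens = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assignment step: nearest generator in feature space
        d2 = ((pixels[:, None, :] - gens[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # centroid step: move each generator to its cluster's mean
        for j in range(k):
            if (labels == j).any():
                gens[j] = pixels[labels == j].mean(axis=0)
    return labels, gens
```

Because the metric lives in color space, the resulting segments need not be spatially connected, which is exactly the situation the DSB post-processing step described above is designed to repair.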
 Date Issued
 2011
 Identifier
 FSU_migr_etd1244
 Format
 Thesis
 Title
 Toolkits for Automatic Web Service and Graphic User Interface Generation.
 Creator

Qu, Yenan, Erlebacher, Gordon, Ye, Ming, Wang, Xiaoqiang, Department of Scientific Computing, Florida State University
 Abstract/Description

Over the past decade, Web Services have played a prominent role on the Internet and in the business world. My interest is focused on developing toolkits for automatic web service and graphical user interface (GUI) generation, KWATT. The standalone KWATT service generator (KSG) is a C++ application that generates web services from Tcl, Python, and Ruby scripts uploaded by the end user with KGT (KWATT GUI Tools), with minimal user intervention. The KSG parser parses the scripts and extracts information about procedures and user-defined control statements, embedded as comments. The KSG creates all necessary C++ wrappers, along with the code stubs required by gSOAP, a C++ interface to the SOAP protocol. Initially conceived to translate VTK front-end Tcl scripts into Web Services, the architecture is sufficiently general to accommodate a wide range of input languages. The work is extended by considering the automatic creation of graphical user interfaces to allow interaction between an end user and the web service generated by the KSG. The KWATT GUI generator (KGG) was developed to achieve this. The KGG is a web service that runs inside a Java-based open-source server, and it performs the four major steps of GUI generation. First, the KGG receives the scripts from KGT after the corresponding web service has been generated successfully. Comment lines inserted into the scripts provide hints to the XML generator about the interface widgets. Second, the structure of the GUI is encoded into an XML file by parsing those scripts with the XML generator. Third, the KGG extracts information from the generated XML file and passes it to a plugin. Finally, the plugin generates the user interface in the corresponding language, which the KGG sends back to the user.
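The "comment lines as hints" mechanism can be sketched generically: the generator scans a script for specially formatted comments and turns them into key/value directives for later stages. The directive syntax below (`# @kwatt key: value`) is entirely hypothetical, invented for illustration; the abstract does not specify KWATT's actual comment format:

```python
import re

# hypothetical directive syntax: "# @kwatt key: value" (not KWATT's real one)
DIRECTIVE = re.compile(r"#\s*@kwatt\s+(\w+)\s*:\s*(.+)")

def extract_directives(script_text):
    """Scan a script for embedded comment directives of the kind a
    KWATT-style generator might read to decide which procedures to
    expose as web services and which GUI widgets to build."""
    found = []
    for line in script_text.splitlines():
        m = DIRECTIVE.search(line)
        if m:
            found.append((m.group(1), m.group(2).strip()))
    return found
```

A generator would then feed such (key, value) pairs to the XML generator, which in turn drives the language-specific GUI plugin.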
 Date Issued
 2009
 Identifier
 FSU_migr_etd2239
 Format
 Thesis
 Title
 Inverse Problems in Polymer Characterization.
 Creator

Takeh, Arsia, Shanbhag, Sachin, Oates, William, MeyerBaese, Anke, Beerli, Peter, Wilgenbusch, Jim, Department of Scientific Computing, Florida State University
 Abstract/Description

This work implements inverse methods in various polymer characterization problems. In the first topic, a new approach is proposed to infer the comonomer content using the Crystaf method, considering and quantifying the associated uncertainty. In the second topic, a comparison is carried out between various rheological probes (methods) to determine their sensitivity in long-chain branching (LCB) detection and measurement. In the last topic, open-source software is implemented to infer the continuous and discrete relaxation modulus.
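The discrete relaxation modulus mentioned in the last topic has the standard form G(t) = Σᵢ gᵢ exp(−t/τᵢ); for a fixed grid of relaxation times, inferring the weights is a linear least-squares problem. A bare-bones sketch of that idea (real spectrum-recovery software would add regularization and choose the τ grid carefully, since the problem is ill-posed for noisy data):

```python
import numpy as np

def relaxation_modulus(t, g, tau):
    """Discrete relaxation modulus G(t) = sum_i g_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    return (np.asarray(g) * np.exp(-t / np.asarray(tau))).sum(axis=1)

def fit_discrete_spectrum(t, G, tau_grid):
    """Recover mode weights g_i from stress-relaxation data G(t) by
    unregularized linear least squares on a fixed tau grid."""
    A = np.exp(-np.asarray(t, dtype=float)[:, None] / np.asarray(tau_grid))
    g, *_ = np.linalg.lstsq(A, np.asarray(G, dtype=float), rcond=None)
    return g
```

The continuous spectrum is the limit of this discrete sum as the τ grid is refined, which is why the same software can present both representations.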
 Date Issued
 2014
 Identifier
 FSU_migr_etd9104
 Format
 Thesis
 Title
 A Block Incremental Algorithm for Computing Dominant Singular Subspaces.
 Creator

Baker, Christopher Grover, Gallivan, Kyle, Srivastava, Anuj, Engelen, Robert van, Department of Computer Science, Florida State University
 Abstract/Description

This thesis presents and evaluates a generic algorithm for incrementally computing the dominant singular subspaces of a matrix. The relationship between the generality of the results and the necessary computation is explored. The performance of this method, both numerical and computational, is discussed in terms of the algorithmic parameters, such as block size and acceptance threshold. Bounds on the error are presented along with a posteriori approximations of these bounds. Finally, a group of methods is proposed which iteratively improve the accuracy of the computed results and the quality of the bounds.
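The incremental idea can be illustrated with a minimal block update: fold each new block of columns into the current rank-k factorization, re-factor, and truncate back to rank k. This sketch uses a plain SVD-based update and is not the thesis's exact algorithm:

```python
import numpy as np

def update(U, S, B, k):
    # Fold a new column block B into the current rank-k factorization:
    # form [U*diag(S) | B], take its thin SVD, truncate back to rank k.
    C = np.hstack([U * S, B])
    Uc, Sc, _ = np.linalg.svd(C, full_matrices=False)
    return Uc[:, :k], Sc[:k]

def block_incremental_svd(A, k, block):
    """Track the k dominant left singular vectors and singular values
    of A, processing its columns one block at a time (a generic sketch
    of the incremental approach)."""
    m, n = A.shape
    # Initialize from the first block.
    U, S, _ = np.linalg.svd(A[:, :block], full_matrices=False)
    U, S = U[:, :k], S[:k]
    for j in range(block, n, block):
        U, S = update(U, S, A[:, j:j + block], k)
    return U, S
```

Each update costs only a thin SVD of an m-by-(k + block) matrix, so the full matrix is never factored at once; when rank(A) ≤ k the recovered subspace and singular values are exact.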
 Date Issued
 2004
 Identifier
 FSU_migr_etd0961
 Format
 Thesis
 Title
 Methods for Linear and Nonlinear Array Data Dependence Analysis with the Chains of Recurrences Algebra.
 Creator

Birch, Johnnie L., Van Engelen, Robert, Ruscher, Paul, Gallivan, Kyle, Whalley, David, Yuan, Xin, Department of Computer Science, Florida State University
 Abstract/Description

The presence of data dependences between statements in a loop iteration space imposes strict constraints on statement order and loop restructuring when preserving program semantics. A compiler determines the safe partial orderings of statements that enhance performance by explicitly disproving the presence of dependences. As a result, the false positive rate of a dependence analysis technique is a crucial factor in the effectiveness of a restructuring compiler's ability to optimize the execution of performance-critical code fragments. This dissertation investigates reducing the false positive rate by improving the accuracy of analysis methods for dependence problems and increasing the total number of problems analyzed. Fundamental to these improvements is the rephrasing of the dependence problem in terms of Chains of Recurrences (CR), a formalism that has been shown to be conducive to efficient loop induction variable analysis. An infrastructure utilizing CR-based analysis methods and enhanced dependence testing techniques is developed and tested. Experimental results indicate that the capabilities of dependence analysis methods can be improved without a reduction in efficiency. This results in a reduction in the false positive rate and an increase in the number of optimized and parallelized code fragments.
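The CR formalism mentioned in this abstract represents a closed-form index expression so it can be evaluated across loop iterations using only additions. A minimal sketch of the pure-sum evaluation rule (not the dissertation's implementation): the chain {0, +, 1, +, 2} reproduces f(i) = i².

```python
def cr_evaluate(cr, n):
    """Evaluate a pure-sum chain of recurrences {c0, +, c1, +, ..., ck}
    at i = 0, 1, ..., n-1. Each step emits the leading coefficient,
    then shifts the chain by adding each coefficient to its left
    neighbor, so no multiplications are needed in the loop."""
    cr = list(cr)  # work on a copy; the chain is updated in place
    values = []
    for _ in range(n):
        values.append(cr[0])
        for j in range(len(cr) - 1):
            cr[j] += cr[j + 1]
    return values
```

A degree-d polynomial induction variable thus costs d additions per iteration, which is also why the CR algebra gives a compiler a cheap, uniform representation for comparing array index expressions in dependence tests.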
 Date Issued
 2007
 Identifier
 FSU_migr_etd3750
 Format
 Thesis