Current Search: Kercheval, Alec N.
Search results
 Title
 Variance Gamma Pricing of American Futures Options.
 Creator

Yoo, Eunjoo, Nolder, Craig A., Huffer, Fred, Case, Bettye Anne, Kercheval, Alec N., Quine, Jack, Department of Mathematics, Florida State University
 Abstract/Description

In financial markets under uncertainty, the classical Black-Scholes model cannot explain empirical facts such as the fat tails observed in return densities. To overcome this drawback, Lévy process and stochastic volatility models were introduced into financial modeling during the last decade. Today crude oil futures markets are highly volatile. The purpose of this dissertation is to develop a mathematical framework in which American options on crude oil futures contracts are priced more effectively than by current methods. In this work, we use the Variance Gamma process to model the futures price process. To generate the underlying process, we use a random tree method, evaluating the option prices at each tree node. Through fifty replications of a random tree, the averaged value is taken as the true option price. Pricing performance using this method is assessed using American options on crude oil commodity contracts from December 2003 to November 2004. For comparison with the Variance Gamma model, we also price using the Black-Scholes model. Over the entire sample period, positive skewness and high kurtosis, especially in the short-term options, are observed. In terms of pricing errors, the Variance Gamma process performs better than the Black-Scholes model for American options on crude oil commodities.
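As a hedged illustration of the model class described above (not the dissertation's random-tree American pricer, whose replication machinery is more involved), the sketch below simulates the Variance Gamma futures price as a gamma-time-changed Brownian motion and prices a European call as a building block. All parameter values are illustrative assumptions.

```python
import numpy as np

# Variance Gamma futures price: X_t = theta*G_t + sigma*W(G_t), with G a
# gamma subordinator (mean rate 1, variance rate nu). The martingale
# correction omega keeps the futures price driftless under pricing.
def vg_terminal_prices(F0, sigma, theta, nu, T, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.gamma(shape=T / nu, scale=nu, size=n_paths)   # gamma time change
    X = theta * G + sigma * np.sqrt(G) * rng.standard_normal(n_paths)
    omega = np.log(1.0 - theta * nu - 0.5 * sigma**2 * nu) / nu
    return F0 * np.exp(omega * T + X)

# European call on the futures; the dissertation prices the American
# counterpart on a random tree and averages fifty replications.
F = vg_terminal_prices(F0=40.0, sigma=0.3, theta=-0.1, nu=0.2, T=0.5,
                       n_paths=100_000)
call = np.exp(-0.05 * 0.5) * np.maximum(F - 40.0, 0.0).mean()
print(round(call, 2))
```

Negative `theta` produces the negative skewness typical of fitted commodity return distributions; `nu` controls the excess kurtosis the abstract refers to.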
 Date Issued
 2008
 Identifier
 FSU_migr_etd0691
 Format
 Thesis
 Title
 On the Multidimensional Default Threshold Model for Credit Risk.
 Creator

Zhou, Chenchen, Kercheval, Alec N., Wu, Wei, Ökten, Giray, Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

This dissertation is based on the structural model framework for default risk first introduced by Garreau and Kercheval (2016) (henceforth: the "GK model"). In this approach, the time of default is defined as the first time the log-return of the firm's stock price jumps below a (possibly stochastic) "default threshold" level. The stock price is assumed to follow an exponential Lévy process and, in the multidimensional case, a multidimensional Lévy process. This new structural model is mathematically equivalent to an intensity-based model in which the intensity is parameterized by a Lévy measure. The dependence between the default times of firms within a basket results from the jump dependence of their respective stock prices and is described by a Lévy copula. Extending the previous work, we focus on generalizing the joint survival probability and related results to the d-dimensional case. Using the link between Lévy processes and multivariate exponential distributions, we derive the joint survival probability and characterize correlated default risk using Lévy copulas. In addition, we extend our results to include stochastic interest rates. Moreover, we describe how to use the default threshold as the interface for incorporating additional exogenous economic factors, and still derive basket credit default swap (CDS) prices in terms of expectations. If we make additional modeling assumptions under which the default intensities become affine processes, we obtain explicit formulas for the single-name and first-to-default (FtD) basket CDS prices, up to quadrature.
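A minimal sketch of the single-name reduction the abstract describes: when the stock's jumps arrive as a compound Poisson process (an illustrative choice, here with rate `lam` and Normal jump sizes), default is the first jump of the log-return below a threshold `-a`, so the default intensity is the Lévy-measure mass of the interval below `-a` and survival is exponential in that intensity.

```python
import math

# Default intensity nu((-inf, -a)) for a compound Poisson Levy measure with
# Normal(mu, sig) jump sizes arriving at rate lam (assumed toy parameters).
def default_intensity(a, lam=2.0, mu=0.0, sig=0.4):
    z = (-a - mu) / sig                       # standardize the cutoff -a
    return lam * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def survival_probability(a, t, **kw):
    # P(tau > t) = exp(-t * nu((-inf, -a))): the first crossing jump is
    # Poisson with the Levy-measure intensity
    return math.exp(-default_intensity(a, **kw) * t)

# Raising the threshold a means fewer jumps can cross it, so survival rises.
print(survival_probability(0.5, 1.0), survival_probability(1.0, 1.0))
```

The d-dimensional results in the dissertation replace this scalar mass with Lévy-copula expressions over joint jump regions; the scalar case above is only the entry point.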
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Zhou_fsu_0071E_14012
 Format
 Thesis
 Title
 Essays on Productivity, Labor Allocations, and Intangible Capital.
 Creator

Malik, Kashif Z. (Kashif Zaheer), Marquis, Milton H., Kercheval, Alec N., Norrbin, Stefan C., Beaumont, Paul M., Department of Economics, Florida State University
 Abstract/Description

The first essay conducts a robustness analysis of Gali's (1999) results. Following Gali's identification strategy, the model is extended to the sectoral level within the private sector. The paper also examines two important breaks: the 1973 recession and 1984, the beginning of the "great moderation". The private sector results suggest that non-technology shocks, rather than technology shocks, are the major cause of business cycle fluctuations. Sectoral data also support this conclusion, with the exception of one sector. Most of the results do not change across the pre- and post-recession and great moderation dates. This essay reinforces the notion that technology shocks play a limited role in aggregate short-run business cycle fluctuations. These results pose a challenge to modern real business cycle theory. The question of whether hours decline in response to a technology shock has attracted a great deal of research in the last decade. The second essay investigates the response of hours in a three-variable model (productivity, hours, and corporate profits) using a vector autoregression with long-run and short-run restrictions. The model imposes three restrictions: technology shocks affect productivity permanently, hours shocks and profit shocks do not affect productivity in the long run, and profit shocks do not affect hours contemporaneously. The results are more encouraging for real business cycle theory and are inconsistent with the conclusion that technology shocks play a limited role in business cycle fluctuations. An important finding is that profits matter empirically, since including them changes the response of hours to a technology shock. Once profits are added to the model, hours do not decline after a productivity shock: though the initial impact is negative, hours recover in the first quarter and co-move with productivity. The response to an hours shock is, however, consistent with Gali (1999): hours worked increase in response to a shock to employment.

Recent empirical research argues that intangible capital has played an important role in explaining productivity gains over the last two decades. In the third essay, intangible capital is introduced into an otherwise standard real business cycle model. Firms expend resources to create intangible capital, which is an additional input in the production function. Because the firm's investment in intangible capital is procyclical, the firm earns positive profits despite being competitive. The firm increases investment in intangible capital in response to both temporary and permanent productivity shocks, and intangible capital plays a significant role in producing endogenous movements in productivity. Firms use more labor and physical capital to produce intangible capital since it raises productivity and future profits. However, there is a trade-off between current-period profits and investment in intangible capital: a permanent technology shock results in a higher factor share of labor and capital allocated to creating intangible capital, which decreases profits in the current period, while the higher investment in intangible capital raises future profits.
 Date Issued
 2011
 Identifier
 FSU_migr_etd5012
 Format
 Thesis
 Title
 Risk Forecasting and Portfolio Optimization with GARCH, Skewed t Distributions and Multiple Timescales.
 Creator

Liu, Yang, Kercheval, Alec N., Schlagenhauf, Don E., Kim, Kyounghee, Nolder, Craig, Department of Mathematics, Florida State University
 Abstract/Description

It is well established that distributions of financial returns are heavy-tailed and exhibit skewness and other non-Gaussian characteristics. As time series, return data have volatilities that vary over time and show profound serial correlation (or cross-correlation in the multivariate case). To address these issues, time series models such as GARCH (generalized autoregressive conditionally heteroskedastic) processes and non-Gaussian distributions such as generalized hyperbolic (GH) distributions have been introduced into financial modeling. A typical procedure featuring GARCH and non-Gaussian distributions involves the following steps. First, filter the data with GARCH to get residuals that are approximately i.i.d. Second, calibrate the parameters of a non-Gaussian distribution to those residuals. Finally, forecast various quantities based on knowledge of the calibrated distribution. Existing implementations of this procedure are fixed-frequency in nature; that is, all three steps are carried out on the same timescale. Reliable filtering and calibration require a sufficient amount of historical data. As the forecast horizon grows, the model demands an increasingly long price history and may become infeasible if data are too scarce. To reduce the model's dependence on data availability, we propose a mixed-frequency method. Filtering and calibration are done on a relatively small timescale where data are more abundant. We then shift to a longer time horizon and make forecasts by aggregating GARCH processes and Monte Carlo simulation. We first apply this mixed-frequency approach to forecasting univariate value-at-risk (VaR) for stock index returns. Backtesting conducted on a variety of timescales shows that the method is indeed viable. Moreover, compared with the fixed-frequency method, our new method produces VaR forecasts that respond more quickly to volatility changes. Therefore, even if data availability is not an issue, the mixed-frequency method is still a valuable alternative for risk managers. Portfolio optimization, a multivariate problem, is tackled next. We enhance traditional Markowitz optimization with expected shortfall (ES), which measures tail risks better than standard deviation, and skewed t distributions, a promising subfamily of GH distributions. The mixed-frequency idea is incorporated as well. Factors that affect the efficient frontier and optimal portfolio compositions are thoroughly discussed. Last but not least, we implement investment strategies based on GARCH/skewed t/ES portfolio optimization and evaluate their performance, both in terms of return and risk.
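The three-step procedure spelled out in the abstract (filter, calibrate, forecast) can be sketched as follows. This is a simplified stand-in, not the dissertation's method: the GARCH(1,1) parameters are assumed rather than estimated, and calibrating a skewed t distribution is replaced by bootstrapping the filtered residuals; the horizon aggregation by Monte Carlo simulation is the mixed-frequency step.

```python
import numpy as np

OMEGA, ALPHA, BETA = 1e-6, 0.08, 0.90   # assumed GARCH(1,1) parameters

def garch_filter(returns):
    # Step 1: filter returns to approximately i.i.d. residuals
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = OMEGA + ALPHA * returns[t - 1] ** 2 + BETA * sigma2[t - 1]
    return returns / np.sqrt(sigma2), sigma2

def mc_var(returns, horizon=10, level=0.99, n_sims=50_000, seed=0):
    rng = np.random.default_rng(seed)
    resid, sigma2 = garch_filter(returns)
    # Steps 2-3: "calibrate" by bootstrapping filtered residuals, then
    # aggregate the daily GARCH dynamics over the horizon by simulation
    s2 = np.full(n_sims, sigma2[-1])
    total = np.zeros(n_sims)
    for _ in range(horizon):
        z = rng.choice(resid, n_sims)
        r = np.sqrt(s2) * z
        total += r
        s2 = OMEGA + ALPHA * r ** 2 + BETA * s2
    return -np.quantile(total, 1.0 - level)   # loss quantile, sign-flipped

daily = np.random.default_rng(1).normal(0.0, 0.01, 1000)   # synthetic returns
print(round(mc_var(daily), 4))
```

Because the filtering runs on the short (daily) timescale, the simulated horizon VaR inherits the latest volatility state `sigma2[-1]`, which is what lets the forecast react quickly to volatility changes.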
 Date Issued
 2012
 Identifier
 FSU_migr_etd4998
 Format
 Thesis
 Title
 Essays on Public Policy and Financial Economics from a Macroeconomics Perspective.
 Creator

Nguyen, Dung, Schlagenhauf, Don, Kercheval, Alec N., Cobbe, James, Beaumont, Paul, Department of Economics, Florida State University
 Abstract/Description

This dissertation consists of three essays. The first two essays (Chapters 2 and 3) examine the effects of raising the retirement age on the life cycle behaviors of individuals and the implications for the social security budget. The third essay (Chapter 4) is an empirical study that tests the hypothesis of investor overreaction when trading neglected stocks. The first essay examines the impact of raising the retirement age on the saving and working behaviors of older individuals, and the associated impact on the social security budget. Its results indicate that the reform would result in a 50% reduction in the social security budget deficit. In terms of behavioral responses, we find that: (1) individuals respond to the reform by saving more progressively during the period prior to retirement (from their early 40s to age 62), while supplying more working hours during the retirement period (ages 62 and older); the intensity of the saving and working-hour responses depends critically on the assumed efficiency indexes of the elderly: the lower (higher) the efficiency index, the more intense the saving (working-hour) response; (2) there is an upward shift in the working-hour profile of individuals as a result of raising the retirement age, and the size of the shift increases with the elderly efficiency index; (3) the participation rate of elderly individuals aged 62 to 80 decreases in versions of the model where the estimated efficiency index of the elderly is relatively low. The second essay focuses on the life-cycle behavioral responses of individuals with different skill levels to raising the retirement age. We find that individuals with different educational attainment respond differently to the reform.

Specifically, individuals with lower-than-average education respond to the policy change with a significant upward shift in the working-hour profile, a higher participation rate, and an aggressive retirement saving motive. On the other hand, individuals with higher-than-average education mainly deal with the policy change through a higher saving rate and/or a lower rate of decumulating their assets in the retirement period. More importantly, the participation rate in the retirement period among these individuals is actually lower than before the policy change. Our findings further suggest that disadvantaged individuals (e.g., those with a low education level) are the ones most heavily affected by the policy reform, in terms of a bigger consumption reduction, a more intense labor supply response, and a higher contribution to the social security budget. Finally, we find a small increase in average labor productivity associated with the policy change. By educational attainment, however, the evidence suggests a decrease in labor productivity among individuals with below-average educational attainment (those with a high school degree or less) and an increase among those with above-average educational attainment (those with a college degree or higher). The third essay is an empirical study that tests the hypothesis of investor overreaction when trading stocks with limited information, such as neglected stocks. Specifically, we design a fundamental scoring method (called NSCORE) and apply it to the neglected stock universe. We also apply this method to the most-watched stock universe (called WSCORE). Our results show that the annualized return of a monthly-rebalancing investment strategy that buys the top 100 NSCORE stocks and sells the bottom 100 NSCORE stocks is 26.31% for the period from the beginning of 1985 to the end of 2009. By contrast, when the same screening method is applied to the most-watched stock universe during the same period, the annualized return of the same strategy drops to about half. This evidence demonstrates the effectiveness of using financial statement data to identify winners and losers among neglected stocks resulting from investor overreaction. We also find that the return difference between top and bottom neglected stocks tends to persist for a long time: the return difference between the top 100 NSCORE and the bottom 100 NSCORE can last up to 36 months (3 years), whereas the return difference among most-watched stocks generally disappears after 12 months (1 year). Comprehensive sensitivity tests confirm that our findings are not driven by well-known anomalies such as the size, book-to-market, and illiquidity effects.
 Date Issued
 2012
 Identifier
 FSU_migr_etd5071
 Format
 Thesis
 Title
 Jump Dependence and Multidimensional Default Risk: A New Class of Structural Models with Stochastic Intensities.
 Creator

Garreau, Pierre, Kercheval, Alec N., Marquis, Milton H., Beaumont, Paul M., Kopriva, David A., Okten, Giray, Department of Mathematics, Florida State University
 Abstract/Description

This thesis presents a new structural framework for multidimensional default risk. The time of default is the first jump of the log-returns of a firm's stock price below a stochastic default level. When the stock price is an exponential Lévy process, this new formulation is equivalent to a default model with stochastic intensity in which the intensity process is parameterized by a Lévy measure. This framework calibrates well to various term structures of credit default swaps. Furthermore, the dependence between the default times of firms within a basket of credit securities is the result of the jump dependence of their respective stock prices: this class of models makes the link between the equity and credit markets. As an application, we show the valuation of a first-to-default swap. To motivate this new framework, we compute the default probability in a traditional structural model of default where the firm value follows a general Lévy process. This is made possible via the resolution of a partial integro-differential equation (PIDE). We solve this equation numerically using a spectral element method based on approximating the solution with high-order polynomials, as described in (Garreau & Kopriva, 2013). This method is able to handle the sharp kernels in the integral term. It is faster than the competing numerical Laplace transform methods used for first passage time problems, and can be used to compute the price of exotic options with barriers. This PIDE approach does not, however, extend well to higher dimensions. To understand joint default in our new framework, we investigate the dependence structures of Lévy processes. We show that for two one-dimensional Lévy processes to form a two-dimensional Lévy process, their joint survival times need to satisfy a two-dimensional version of the memoryless property. We make the link with bivariate exponential random variables and the Marshall-Olkin copula. This result yields a necessary construction of dependent Lévy processes and a characterization theorem for Poisson random measures, and has important ramifications for default models with jointly conditionally Poisson processes.
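The Marshall-Olkin link mentioned above can be sketched directly: idiosyncratic shocks at rates `lam1`, `lam2` plus a common shock at rate `lam12` (standing in for a joint jump of both stocks; the rates are illustrative) give default times with the bivariate memoryless property and a closed-form joint survival function.

```python
import numpy as np

# Marshall-Olkin construction: tau_i = min(idiosyncratic shock, common shock)
def sample_mo_default_times(lam1, lam2, lam12, n, seed=0):
    rng = np.random.default_rng(seed)
    e1 = rng.exponential(1.0 / lam1, n)     # firm-1-only shock
    e2 = rng.exponential(1.0 / lam2, n)     # firm-2-only shock
    e12 = rng.exponential(1.0 / lam12, n)   # common shock hitting both firms
    return np.minimum(e1, e12), np.minimum(e2, e12)

t1, t2 = sample_mo_default_times(0.5, 0.8, 0.3, 200_000)

# Empirical joint survival vs the closed form
# P(tau1 > s, tau2 > t) = exp(-lam1*s - lam2*t - lam12*max(s, t))
s = t_ = 1.0
emp = np.mean((t1 > s) & (t2 > t_))
exact = np.exp(-0.5 * s - 0.8 * t_ - 0.3 * max(s, t_))
print(round(emp, 3), round(exact, 3))
```

The common-shock term `lam12*max(s, t)` is exactly the failure of independence that a Lévy copula captures in the general jump-dependence setting.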
 Date Issued
 2013
 Identifier
 FSU_migr_etd8555
 Format
 Thesis
 Title
 Modeling HighFrequency Order Book Dynamics with Support Vector Machines.
 Creator

Zhang, Yuan, Kercheval, Alec N., Niu, Xufeng, Nichols, Warren, Kim, Kyounghee, Department of Mathematics, Florida State University
 Abstract/Description

A machine learning based framework is proposed to capture the dynamics of high-frequency limit order books in financial markets and to automate real-time prediction of metrics characterizing those dynamics, such as mid-price movement and price spread crossing. By representing each entry in a limit order book with a vector of features, including price and volume at different levels as well as statistical features derived from the order book, the proposed framework builds a learning model for each metric with the help of multi-class support vector machines (SVMs) to predict the direction of market movement. Experiments with real as well as synthetic data establish that the features selected by the proposed framework have high differentiating capability, the resulting models are effective and efficient in predicting price movements, and trading strategies based on them can achieve profitable returns with low risk.
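The pipeline described above can be sketched on synthetic snapshots (not real order-book messages, and with a toy label rule of our own): each observation is a feature vector of order book quantities, and a multi-class SVM predicts the mid-price direction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
imbalance = rng.uniform(-1.0, 1.0, n)   # (bid_vol - ask_vol) / total_vol
spread = rng.uniform(0.01, 0.05, n)     # best ask minus best bid
noise = rng.normal(size=(n, 3))         # stand-ins for deeper-level features
X = np.column_stack([imbalance, spread, noise])

# Toy label rule: strong bid-side imbalance pushes the mid-price up,
# strong ask-side imbalance pushes it down; classes {0: down, 1: flat, 2: up}
y = np.where(imbalance > 0.3, 2, np.where(imbalance < -0.3, 0, 1))

# Multi-class SVM (scikit-learn's SVC handles >2 classes one-vs-one)
clf = SVC(kernel="rbf", C=10.0).fit(X[:800], y[:800])
acc = (clf.predict(X[800:]) == y[800:]).mean()
print(round(acc, 2))
```

In the real setting the labels come from observed future mid-price moves rather than a rule, and feature selection over many book levels is the substantive step; the sketch only shows the shape of the learning problem.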
 Date Issued
 2013
 Identifier
 FSU_migr_etd8670
 Format
 Thesis
 Title
 Pricing and Hedging Derivatives with Sharp Profiles Using Tuned High Resolution Finite Difference Schemes.
 Creator

Islim, Ahmed Derar, Kopriva, David A., Winn, Alice, Kercheval, Alec N., Ewald, Brian, Okten, Giray, Department of Mathematics, Florida State University
 Abstract/Description

We price and hedge financial derivatives with sharp profiles by solving the corresponding advection-diffusion-reaction partial differential equation using new high resolution finite difference schemes, which show clear numerical advantages over standard finite difference methods. High order finite difference methods, commonly used in the computational finance literature, fail to handle the discontinuities in the payoff functions of derivatives with discontinuous payoffs, such as digital options. Their numerical solutions produce spurious oscillations in the neighborhood of the discontinuities, which make the resulting prices and hedges impractical. We therefore extend linear finite difference methods by developing high resolution nonlinear schemes that resolve these discontinuities and allow such options to be priced and hedged with higher accuracy. The approximations detect discontinuous profiles automatically using nonlinear functions, called limiters, and smooth the discontinuities minimally and locally to produce non-oscillatory prices and Greeks with high resolution. These limiters are modified, more relaxed versions of standard limiting functions from fluid dynamics, adapted to accommodate the extra physical diffusion (volatility) in financial problems. We prove that this family of new schemes is total variation diminishing (TVD), which guarantees non-oscillatory solutions, and we derive and illustrate the ranges and characteristics of the limiting functions for which the TVD condition holds. We test these methods by pricing and hedging financial derivatives with digital-like profiles under the Black-Scholes-Merton (BSM), constant elasticity of variance (CEV), and Heath-Jarrow-Morton (HJM) models. More specifically, we price and hedge digital options under the BSM and CEV models, price bonds under the HJM model, and price supershare and gap options under the BSM model. The new limiters we developed produce higher-accuracy profiles for option prices and hedges than standard finite difference schemes or standard limiters, and guarantee non-oscillatory solutions.
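The limiter idea above can be illustrated minimally with minmod, a standard TVD limiter from the fluid dynamics literature (the dissertation develops relaxed variants adapted to the diffusion in finance problems). The setting is pure linear advection, the advective part of a pricing PDE, applied to a digital-option-like step profile.

```python
import numpy as np

def minmod(a, b):
    # Returns the smaller-magnitude slope when signs agree, else zero
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    # One limited second-order upwind step for u_t + a u_x = 0 (CFL number c,
    # periodic boundaries); the limited slope vanishes at discontinuities
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = u + 0.5 * (1.0 - c) * slope        # limited interface values
    return u - c * (flux - np.roll(flux, 1))

def total_variation(u):
    # Periodic total variation (includes the wrap-around jump)
    return np.abs(np.diff(u)).sum() + abs(u[0] - u[-1])

# Step profile, as in a digital payoff; TVD forbids new oscillations, so the
# total variation cannot grow as the scheme runs.
u = np.where(np.arange(100) < 50, 1.0, 0.0)
tv0 = total_variation(u)
for _ in range(20):
    u = tvd_step(u, 0.5)
print(tv0, total_variation(u) <= tv0 + 1e-12)
```

An unlimited second-order scheme on the same profile would overshoot near the jump; the limiter trades a locally first-order stencil at the discontinuity for the oscillation-free solution the abstract requires.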
 Date Issued
 2014
 Identifier
 FSU_migr_etd8813
 Format
 Thesis
 Title
 Essays in Human Capital Investment.
 Creator

Keightley, Mark P., MacPherson, David A., Kercheval, Alec N., Marquis, Milton H., Bokhari, Farasat, Department of Economics, Florida State University
 Abstract/Description

A topic of great interest to economists, educators, and policy makers for some time has been the effectiveness of college financial aid. The second chapter of this dissertation analyzes the effectiveness of three different types of education policies: tuition subsidies (broad based, merit based, and flat tuition), grant subsidies (broad based and merit based), and loan limit restrictions. A quantitative theory of college is developed within the context of a general equilibrium overlapping-generations economy. College is modeled as a multi-period risky investment with endogenous enrollment, time-to-degree, and dropout behavior. Tuition costs can be financed using federal grants, student loans, and working while in college. The model predicts that broad based tuition subsidies and grants increase college enrollment. However, due to the correlation between ability and financial resources, most of these new students are from the lower end of the ability distribution and eventually drop out or take longer than average to complete college. Merit based education policies counteract this adverse selection problem, but at the cost of a muted enrollment response. Our last policy experiment highlights an important interaction between the labor-supply margin and borrowing. A significant decrease in enrollment is found to occur only when borrowing constraints are severely tightened and the option to work while in school is removed. This result suggests that previous models that ignored the student's labor supply when analyzing borrowing constraints may be insufficient. Recently, attention has been directed toward understanding the amount of debt that college students incur. The third chapter analyzes subsidized Stafford loan borrowing between the 1992-93 and 2003-04 school years. During this time, subsidized Stafford borrowing by full-time undergraduates increased from 42.9 percent to 50.5 percent. At the same time, the fraction of full-time subsidized borrowers constrained by the maximum student loan limit increased from 53.7 percent to 67.2 percent. A decomposition method similar to the one developed by Blinder (1973) and Oaxaca (1973) for linear regression models, and later generalized to the probit framework by Even and Macpherson (1990), is used to identify the key factors responsible for the increase in the subsidized Stafford loan participation rate and in the fraction of subsidized borrowers constrained by the maximum loan limit. The model underlying the decomposition accounts for sample selection in the borrowing decision-making process. Overall, the results presented in Chapter 3 suggest that increases in the real cost of college explain the majority of the rise in subsidized Stafford loan borrowing.
 Date Issued
 2008
 Identifier
 FSU_migr_etd3242
 Format
 Thesis
 Title
 Essays on Economic Fluctuations in a Vintage Capital Model.
 Creator

Tantivong, Wuttipan, Marquis, Milton H., Kercheval, Alec N., Schlagenhauf, Don, Atolia, Manoj, Department of Economics, Florida State University
 Abstract/Description

I use the vintage capital model to study the dynamic response of the economy to changes in technological processes. Technological change is an important factor in determining the growth of productivity and output and in shaping the business cycle in the United States. Of particular interest is investment-specific, or embodied, technological change. There are three essays in this dissertation. In the first essay (Chapter 2), the vintage capital model with heterogeneous labor is used to explain economic fluctuations under both disembodied and embodied technological progress. I show that the two kinds of technology shocks lead to different responses in the key macroeconomic variables (consumption, investment, and output). The number of vintages of capital goods (which establishes the service life of capital), the sequences of technology shocks, and the persistence of the shock processes also make a difference. The second essay (Chapter 3) examines the rise in the wage premium that has taken place over the last three decades, especially during the 1980s. The vintage capital model with heterogeneous labor provides a framework for examining how new technology affects the demand for labor. I find that technological progress enhances labor productivity and can increase the wage rate of workers, but the resulting increase in the wage premium is much too low to be consistent with that observed in the data. From the perspective of this model, labor-demand-driven factors do not appear to be a plausible explanation for the observed increase in the wage premium. Hence I examine whether labor supply factors may account for the observed dramatic increase, and find that modest changes in the distribution of the workforce can have very large effects on the wage premium. The changes in the skill distribution of the workforce therefore appear to be a promising avenue for future research. In the final essay (Chapter 4), I focus on the responses of households' allocation of time to both permanent embodied technology shocks and transitory technology shocks in a vintage capital model in which growth is determined endogenously through investment in human capital. I show that the different sources of technology shocks can lead to different dynamic responses of key macroeconomic variables, especially in the allocation of time. These results suggest that differentiating between these shocks may help explain shifts in the cyclical behavior of the economy and may play a significant role in accounting for the evolution of human capital in the economy, and thereby deserves further study.
 Date Issued
 2009
 Identifier
 FSU_migr_etd1698
 Format
 Thesis
 Title
 A Spectral Element Method to Price Single and MultiAsset European Options.
 Creator

Zhu, Wuming, Kopriva, David A., Huﬀer, Fred, Case, Bettye Anne, Kercheval, Alec N., Okten, Giray, Wang, Xiaoming, Department of Mathematics, Florida State University
 Abstract/Description

We develop a spectral element method to price European options under the Black-Scholes model, Merton's jump diffusion model, and Heston's stochastic volatility model with one or two assets. The method uses piecewise high-order Legendre polynomial expansions to approximate the option price, represented pointwise on a Gauss-Lobatto mesh within each element. This piecewise polynomial approximation allows an exact representation of the non-smooth initial condition. For options on one asset under the jump diffusion model, the convolution integral is approximated by high-order Gauss-Lobatto quadratures. A second-order implicit/explicit (IMEX) approximation is used to integrate in time, with the convolution integral integrated explicitly. The use of the IMEX approximation in time means that only a block-diagonal, rather than full, system of equations needs to be solved at each time step. For options with two variables, i.e., two assets under the Black-Scholes model or one asset under the stochastic volatility model, the domain is subdivided into quadrilateral elements. Within each element, the expansion basis functions are chosen to be tensor products of the Legendre polynomials. Three iterative methods are investigated to solve the system of equations at each time step with the corresponding second-order time integration schemes, i.e., IMEX and Crank-Nicolson. The boundary conditions are also carefully studied for the stochastic volatility model. The method is spectrally accurate (exponentially convergent) in space and second-order accurate in time for European options under all three models. Spectral accuracy is observed not only in the solution but also in the Greeks.
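As a small editorial illustration of the Legendre-Gauss-Lobatto meshes this abstract refers to (not code from the dissertation), the sketch below builds the nodes on [-1, 1] and checks that polynomial interpolation of a smooth function on them converges spectrally; the test function exp and the degrees used are arbitrary assumptions for the demo.

```python
import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto_nodes(n):
    """n + 1 Legendre-Gauss-Lobatto nodes on [-1, 1]:
    the endpoints plus the roots of P_n'(x)."""
    coeffs = np.zeros(n + 1)
    coeffs[-1] = 1.0                              # Legendre polynomial P_n
    interior = np.sort(legendre.legroots(legendre.legder(coeffs)))
    return np.concatenate(([-1.0], interior, [1.0]))

def interp_error(f, nodes, grid):
    """Max error of degree-(len(nodes) - 1) polynomial interpolation."""
    poly = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    return np.max(np.abs(np.polyval(poly, grid) - f(grid)))

f = np.exp                                        # smooth test function (assumption)
grid = np.linspace(-1.0, 1.0, 2001)
errors = [interp_error(f, gauss_lobatto_nodes(n), grid) for n in (4, 8, 12)]
# for smooth f the errors decay roughly geometrically in the degree n
```

For a non-smooth payoff such as a call option, this geometric decay holds only if element boundaries are aligned with the kink, which is exactly what the piecewise (element-wise) construction in the abstract provides.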
 Date Issued
 2008
 Identifier
 FSU_migr_etd0513
 Format
 Thesis
 Title
 Bayesian Modeling and Variable Selection for Complex Data.
 Creator

Li, Hanning, Pati, Debdeep, Huffer, Fred W. (Fred William), Kercheval, Alec N., Sinha, Debajyoti, Bradley, Jonathan R., Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

As we routinely encounter high-throughput datasets in complex biological and environmental research, developing novel models and methods for variable selection has received widespread attention. In this dissertation, we address a few key challenges in Bayesian modeling and variable selection for high-dimensional data with complex spatial structures. a) Most Bayesian variable selection methods are restricted to mixture priors having separate components for characterizing the signal and the noise. However, such priors encounter computational issues in high dimensions. This has motivated continuous shrinkage priors, which resemble the two-component priors while facilitating computation and interpretability. While such priors are widely used for estimating high-dimensional sparse vectors, selecting a subset of variables remains a daunting task. b) Spatial and spatio-temporal data sets with complex structures are nowadays commonly encountered in scientific research fields ranging from atmospheric science, forestry, and environmental science to biological and social science. Selecting important spatial variables that have significant influence on the occurrence of events is necessary and essential for providing insights to researchers. Self-excitation, the feature that the occurrence of an event increases the likelihood of more occurrences of the same type of event nearby in time and space, can be found in many natural and social events. Research on modeling data with the self-excitation feature has drawn increasing interest recently. However, the existing literature on self-exciting models that include high-dimensional spatial covariates is still underdeveloped. c) The Gaussian process is among the most powerful modeling frameworks for spatial data. Its major bottleneck is the computational complexity that stems from inversion of the dense matrices associated with a Gaussian process covariance. 
Hierarchical divide-and-conquer Gaussian process models have been investigated for ultra-large data sets. However, the computation associated with scaling the distributed computing algorithm to handle a large number of subgroups poses a serious bottleneck. In Chapter 2 of this dissertation, we propose a general approach for variable selection with shrinkage priors. The presence of very few tuning parameters makes our method attractive in comparison to ad hoc thresholding approaches. The applicability of the approach is not limited to continuous shrinkage priors; it can be used along with any shrinkage prior. Theoretical properties for near-collinear design matrices are investigated, and the method is shown to have good performance in a wide range of synthetic data examples and in a real data example on selecting genes affecting survival due to lymphoma. In Chapter 3 of this dissertation, we propose a new self-exciting model that allows the inclusion of spatial covariates. We develop algorithms that are effective in obtaining accurate estimation and variable selection results in a variety of synthetic data examples. Our proposed model is applied to Chicago crime data, where the influence of various spatial features is investigated. In Chapter 4, we focus on a hierarchical Gaussian process regression model for ultra-high-dimensional spatial datasets. By evaluating the latent Gaussian process on a regular grid, we propose an efficient computational algorithm through circulant embedding. The latent Gaussian process borrows information across multiple subgroups, thereby obtaining a more accurate prediction. The hierarchical model and our proposed algorithm are studied through simulation examples.
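Circulant embedding, the FFT device mentioned in the Chapter 4 summary, can be illustrated in one dimension. The sketch below is an editorial illustration with an assumed exponential covariance, not the dissertation's algorithm: it embeds the Toeplitz covariance of a stationary process on a regular grid into a circulant matrix, whose eigenvalues come from a single FFT, and draws exact samples in O(n log n).

```python
import numpy as np

def sample_stationary_gp(n, cov, rng):
    """Draw one exact sample of a stationary Gaussian process at n
    equally spaced grid points, given cov(h) for integer lags h.
    The n x n Toeplitz covariance is embedded in a circulant matrix
    of size 2n - 2, diagonalized by the DFT."""
    c = cov(np.arange(n))
    row = np.concatenate([c, c[-2:0:-1]])     # circulant first row
    lam = np.fft.fft(row).real                # circulant eigenvalues
    lam = np.maximum(lam, 0.0)                # clip tiny negative values
    m = row.size
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * z)
    return y.real[:n]                         # real part has covariance Toeplitz(c)

rng = np.random.default_rng(0)
x = sample_stationary_gp(256, lambda h: np.exp(-np.abs(h) / 10.0), rng)
```

The exponential covariance used here is known to admit a nonnegative-definite embedding; for other kernels the clipped eigenvalues introduce a small approximation unless the embedding is enlarged.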
 Date Issued
 2017
 Identifier
 FSU_FALL2017_Li_fsu_0071E_14159
 Format
 Thesis
 Title
 Ensemble Methods for Capturing Dynamics of Limit Order Books.
 Creator

Wang, Jian, Zhang, Jinfeng, Ökten, Giray, Kercheval, Alec N., Mio, Washington, Simon, Capstick C., Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

Owing to rapid developments in information technology, the limit order book (LOB) mechanism has come to prevail in today's financial markets. In this paper, we propose ensemble machine learning architectures for capturing the dynamics of high-frequency limit order books, such as predicting price spread-crossing opportunities in a future time interval. The paper is data-driven in orientation, so experiments with five real-time stock datasets from NASDAQ, timestamped to the nanosecond, are established. The models are trained and validated on training and validation data sets. Compared with other models, such as logistic regression and the support vector machine (SVM), our out-of-sample testing results show that ensemble methods have better performance in both statistical measurement and computational efficiency. A simple trading strategy devised from our models shows good profit and loss (P&L) results. Although this paper focuses on limit order books, similar frameworks and processes can be extended to other classification research areas. Keywords: limit order books, high-frequency trading, data analysis, ensemble methods, F1 score.
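As a toy illustration of two ingredients named in the keywords, an ensemble combiner and the F1 score, here is a self-contained majority-vote sketch; the tiny label vectors are invented for the demo and have nothing to do with the NASDAQ data used in the dissertation.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def majority_vote(predictions):
    """Combine per-model binary predictions by simple majority."""
    return [int(sum(votes) * 2 > len(votes)) for votes in zip(*predictions)]

# Three weak base models that err on different examples (made-up labels)
y_true = [1, 1, 1, 0, 0, 0]
m1 = [1, 1, 0, 0, 0, 1]
m2 = [1, 0, 1, 0, 1, 0]
m3 = [0, 1, 1, 1, 0, 0]
ensemble = majority_vote([m1, m2, m3])
# Here each base model scores F1 = 2/3, while the ensemble recovers
# y_true exactly (F1 = 1.0), because the models' errors do not overlap
```

The example shows the mechanism by which ensembles can beat individual classifiers, as the abstract reports: votes cancel uncorrelated errors.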
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Wang_fsu_0071E_14047
 Format
 Thesis
 Title
 Scalable and Structured High Dimensional Covariance Matrix Estimation.
 Creator

Sabnis, Gautam, Pati, Debdeep, Kercheval, Alec N., Sinha, Debajyoti, Chicken, Eric, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

With rapid advances in data acquisition and storage techniques, modern scientific investigations in epidemiology, genomics, imaging, and networks are increasingly producing challenging data structures in the form of high-dimensional vectors, matrices, and multiway arrays (tensors), rendering traditional statistical and computational tools inappropriate. One hope for meaningful inference in such situations is to discover an inherent lower-dimensional structure that explains the physical or biological process generating the data. The structural assumptions impose constraints that force the objects of interest to lie in lower-dimensional spaces, thereby facilitating their estimation and interpretation while at the same time reducing computational burden. The assumption of an inherent structure, motivated by various scientific applications, is often adopted as the guiding light in the analysis and is fast becoming a standard tool for parsimonious modeling of such high-dimensional data structures. This thesis is specifically directed towards the methodological development of statistical tools, with attractive computational properties, for drawing meaningful inferences through such structures. The third chapter of this thesis proposes a distributed computing framework, based on a divide-and-conquer strategy and hierarchical modeling, to accelerate posterior inference for high-dimensional Bayesian factor models. Our approach distributes the task of high-dimensional covariance matrix estimation to multiple cores, solves each subproblem separately via a latent factor model, and then combines these estimates to produce a global estimate of the covariance matrix. Existing divide-and-conquer methods focus exclusively on dividing the total number of observations n into subsamples while keeping the dimension p fixed. 
Our approach is novel in this regard: it includes all n samples in each subproblem and instead splits the dimension p into smaller subsets, one per subproblem. The subproblems themselves can be challenging to solve when p is large, due to the dependencies across dimensions. To circumvent this issue, a novel hierarchical structure is specified on the latent factors that allows for flexible dependencies across dimensions while still maintaining computational efficiency. Our approach is readily parallelizable and is shown to yield computational gains of several orders of magnitude in comparison to fitting a full factor model. The fourth chapter of this thesis proposes a novel way of estimating a covariance matrix that can be represented as a sum of a low-rank matrix and a diagonal matrix. The proposed method compresses high-dimensional data, computes the sample covariance in the compressed space, and lifts it back to the ambient space via a decompression operation. A salient feature of our approach, relative to the existing literature on combining sparsity and low-rank structure in covariance matrix estimation, is that we do not require the low-rank component to be sparse. A principled framework for estimating the compressed dimension using Stein's Unbiased Risk Estimation theory is demonstrated. In the final chapter of this thesis, we tackle the problem of variable selection in high dimensions. Consistent model selection in high dimensions has received substantial interest in recent years and is an extremely challenging problem for Bayesians. The literature on model selection with continuous shrinkage priors is even less developed, owing to the unavailability of exact zeros in the posterior samples of the parameter of interest. Heuristic methods based on thresholding the posterior mean, which lack theoretical justification, are often used in practice, and inference is highly sensitive to the choice of the threshold. 
We aim to address the problem of selecting variables through a novel method of post-processing the posterior samples.
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Sabnis_fsu_0071E_14043
 Format
 Thesis
 Title
 Optimal Portfolio Execution under TimeVarying Liquidity Constraints.
 Creator

Lin, HuaYi, Fahim, Arash, Atkins, Jennifer, Kercheval, Alec N., Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

The problem of optimal portfolio execution has become one of the most important problems in financial mathematics. Over the past two decades, numerous researchers have developed a variety of models to address it. In this dissertation, we extend the LOB (Limit Order Book) model proposed by Obizhaeva and Wang (2013) by incorporating a more realistic assumption on order book depth: the amount of liquidity provided by a LOB market is finite at all times. We use an algorithmic approach to solve the problem of optimal execution under time-varying constraints on the depth of a LOB. For the simplest case, where the order book depth stays at a fixed level for the entire trading horizon, we reduce the optimal execution problem to a one-dimensional root-finding problem, which can be readily solved by standard numerical algorithms. When the depth of the LOB is monotone in time, we first apply the KKT (Karush-Kuhn-Tucker) conditions to narrow down the set of candidate strategies and then use a dichotomy-based search algorithm to pin down the optimal one. For the general case, where the order book depth does not exhibit any particular pattern, we start from the optimal strategy subject to no liquidity constraints and iterate over the execution strategy by sequentially adding more constraints to the problem in a specific fashion until primal feasibility is achieved. Numerical experiments indicate that our algorithms give results comparable to those of the existing convex optimization toolbox CVXOPT, with significantly lower time complexity.
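The KKT-plus-root-finding pattern described here can be illustrated with a deliberately simplified stand-in: quadratic temporary impact with per-period depth caps, where the unconstrained optimum equalizes marginal cost across periods and a bisection on the multiplier restores primal feasibility. The cost model and the numbers are my own assumptions, not the dissertation's model.

```python
def execution_schedule(X, caps, k, tol=1e-10):
    """Split a parent order of size X across periods.  With quadratic
    impact cost k[i] * x[i]**2, the KKT conditions give equal marginal
    cost lam = 2 * k[i] * x[i] wherever the depth cap is not binding;
    we clip at the caps and bisect on lam until the trades sum to X."""
    assert sum(caps) >= X, "depth caps make the order infeasible"

    def total(lam):
        return sum(min(cap, lam / (2 * ki)) for cap, ki in zip(caps, k))

    lo, hi = 0.0, 2.0 * max(k) * X
    while total(hi) < X:          # grow the bracket if caps bind early
        hi *= 2.0
    while hi - lo > tol:          # bisection on the shadow price lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) < X else (lo, mid)
    lam = 0.5 * (lo + hi)
    return [min(cap, lam / (2 * ki)) for cap, ki in zip(caps, k)]

# 10 shares over 4 periods; deeper (cheaper) periods absorb more volume
schedule = execution_schedule(10.0, caps=[3, 3, 3, 3], k=[1.0, 1.0, 2.0, 2.0])
# → approximately [3, 3, 2, 2]
```

Since total(lam) is nondecreasing in the multiplier, bisection is guaranteed to converge, which is the same structural property the abstract exploits when reducing the fixed-depth case to one-dimensional root finding.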
 Date Issued
 2018
 Identifier
 2018_Sp_Lin_fsu_0071E_14349
 Format
 Thesis
 Title
 QuasiMonte Carlo and Markov Chain QuasiMonte Carlo Methods in Estimation and Prediction of Time Series Models.
 Creator

Tzeng, YuYing, Ökten, Giray, Beaumont, Paul M., Srivastava, Anuj, Kercheval, Alec N., Kim, Kyounghee (Professor of Mathematics), Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

Randomized quasi-Monte Carlo (RQMC) methods were first developed in the mid-1990s as a hybrid of Monte Carlo and quasi-Monte Carlo (QMC) methods. They were designed to have the superior error reduction properties of low-discrepancy sequences while remaining amenable to the statistical error analysis that Monte Carlo methods enjoy. RQMC methods are used successfully in applications such as option pricing, high-dimensional numerical integration, and uncertainty quantification. This dissertation discusses the use of RQMC and QMC methods in econometric time series analysis. In time series simulation, the two main problems are parameter estimation and forecasting. The parameter estimation problem involves the use of Markov chain Monte Carlo (MCMC) algorithms such as Metropolis-Hastings and Gibbs sampling. In Chapter 3, we use an approximately completely uniformly distributed sequence recently discussed by Owen et al. [2005], and an RQMC sequence introduced by Ökten [2009], in some MCMC algorithms to estimate the parameters of a Probit and an SV-logAR(1) model. Numerical results are used to compare these sequences with standard Monte Carlo simulation. In the time series forecasting literature, there was an earlier attempt to use QMC by Li and Winker [2003], which did not provide a rigorous error analysis. Chapter 4 presents how RQMC can be used in time series forecasting with a proper error analysis. Numerical results are used to compare various sequences for a simple AR(1) model. We then apply RQMC to compute the value-at-risk and expected shortfall measures for a stock portfolio whose returns follow a highly nonlinear Markov switching stochastic volatility model that does not admit analytical solutions for the returns distribution. The proper use of QMC and RQMC methods in Monte Carlo and Markov chain Monte Carlo algorithms can greatly reduce the computational error in many applications from science, engineering, economics, and finance. 
This dissertation brings the proper (R)QMC methodology to time series simulation and discusses the advantages as well as the limitations of the methodology compared to standard Monte Carlo methods.
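The core RQMC idea, a deterministic low-discrepancy point set plus independent random shifts so that each replication is unbiased and the replications yield a statistical error estimate, can be sketched in a few lines. The base-2 van der Corput sequence and the test integrand below are my own illustrative choices, not the sequences studied in the dissertation.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    points = []
    for i in range(n):
        x, denom, k = 0.0, 1.0, i
        while k:
            k, digit = divmod(k, base)
            denom *= base
            x += digit / denom
        points.append(x)
    return points

def rqmc_mean(f, n=512, reps=20, seed=0):
    """Cranley-Patterson rotation: shift the whole point set by one
    uniform draw per replication (mod 1), giving independent unbiased
    QMC estimates of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    pts = van_der_corput(n)
    estimates = []
    for _ in range(reps):
        u = rng.random()
        estimates.append(sum(f((p + u) % 1.0) for p in pts) / n)
    return sum(estimates) / reps

est = rqmc_mean(lambda x: x * x)   # integral of x^2 over [0, 1] is 1/3
```

The sample variance of the per-replication estimates is what makes the error analysis "proper" in the Monte Carlo sense, which is the property the abstract emphasizes over plain QMC.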
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Tzeng_fsu_0071E_13607
 Format
 Thesis
 Title
 Capital Flow Dynamics: Theory and Evidence.
 Creator

Newell, Graham David, Atolia, Manoj, Kercheval, Alec N., Dmitriev, Mikhail I., Kreamer, Jonathan, Florida State University, College of Social Sciences and Public Policy, Department of Economics
 Abstract/Description

My dissertation investigates the dynamics of international capital flows, distinguishing between net and gross flows. Chapter One considers both net and gross capital flows. I develop a small open economy model that features endogenous sudden stops in net capital inflows. I then show how a second, electronic currency can be used as a policy tool to reduce the volatility of capital flows. I also examine the empirical regularities of gross capital flows for the G7 countries and the implications for gross flows from a data set of investment decisions by a selection of large U.S. public pension funds. I document three important patterns in the aggregate data. First, gross capital flows are highly volatile. Second, there is a strong positive relationship between capital inflows and outflows. Third, gross capital flows are acyclical when accounting for the global financial cycle; global factors, rather than the domestic business cycle, account for a significantly greater proportion of the variation in gross flows. From firm-level pension fund data, I find that international investment decisions are large enough to contribute to gross flow volatility. For the periphery economies that receive these firms' equity investments, participation is variable, with firms entering, exiting, and changing the mix of markets in which they invest. The cost structure of foreign investing suggests that fixed participation costs are statistically significant and quantitatively important. The stylized facts I document are at odds with economic theory regarding capital flows; therefore, in Chapter Two, I develop a large open-economy portfolio choice model and solve it globally with a novel solution algorithm. Using this model, I show that a fixed participation cost for investing abroad of less than ten basis points is sufficient to reproduce both the observed volatility of gross capital flows and the correlation between inflows and outflows.
 Date Issued
 2019
 Identifier
 2019_Spring_Newell_fsu_0071E_15058
 Format
 Thesis
 Title
 Random Walks over Point Processes and Their Application in Finance.
 Creator

Salehy, Seyyed Navid, Kercheval, Alec N., Ewald, Brian, Fahim, Arash, Ökten, Giray, Huffer, Fred W. (Fred William), Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

In continuous-time models in finance, it is common to assume that prices follow a geometric Brownian motion. More precisely, it is assumed that the price at time t ≥ 0 is given by Z_t = Z₀ exp(σB_t + mt), where Z₀ is the initial price, B is standard Brownian motion, σ is the volatility, and m is the drift. We discuss how Z can be viewed as the limit of a sequence of discrete price models based on random walks. We note that in the usual random walks, jumps can only happen at deterministic times. We first construct a natural simple model for the price by considering a random walk in which jumps can happen at random times following a counting process N. We then develop a sequence of discrete price models using random walks over point processes. The limit process gives the new price model Z_t = Z₀ exp(σB_{Λ_t} + mΛ_t), where Λ is the compensator for the counting process N. We note that if N is a Poisson process with intensity 1, then this model coincides with the geometric Brownian motion model for the price. But the new model provides more flexibility, as we can choose N to be one of many other well-known counting processes. This includes not only homogeneous and inhomogeneous Poisson processes, which have deterministic compensators, but also Hawkes processes, which have stochastic compensators. We also discuss and prove many properties of the process B_Λ. For example, we show that B_Λ is a continuous square-integrable martingale, and we discuss when B_Λ has uncorrelated increments and when it has independent increments. Moreover, we investigate how the Black-Scholes pricing formula changes if the price of the risky asset follows this new model when N is an inhomogeneous Poisson process. We show that the usual Black-Scholes formula is obtained when the counting process N is a Poisson process with intensity 1.
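When the compensator Λ is deterministic (the inhomogeneous-Poisson case discussed in the abstract), B_{Λ_t} has independent Gaussian increments with variance Λ(t₁) − Λ(t₀), so a path of the price model Z_t = Z₀ exp(σB_{Λ_t} + mΛ_t) is straightforward to simulate. The sketch below is my own illustration with an arbitrarily chosen intensity, not code from the dissertation.

```python
import math
import random

def simulate_price(z0, sigma, m, Lam, T, n, rng):
    """Path of Z_t = z0 * exp(sigma * B_{Lam(t)} + m * Lam(t)) on a grid
    of n steps, for a deterministic, nondecreasing compensator Lam
    (e.g. the integrated intensity of an inhomogeneous Poisson process).
    B_{Lam} is built from independent N(0, Lam(t1) - Lam(t0)) increments."""
    times = [i * T / n for i in range(n + 1)]
    b, path = 0.0, [z0]
    for t0, t1 in zip(times, times[1:]):
        b += math.sqrt(Lam(t1) - Lam(t0)) * rng.gauss(0.0, 1.0)
        path.append(z0 * math.exp(sigma * b + m * Lam(t1)))
    return path

rng = random.Random(42)
Lam = lambda t: t + 0.5 * t * t    # compensator of intensity 1 + t (assumption)
path = simulate_price(100.0, 0.2, -0.02, Lam, T=1.0, n=252, rng=rng)
```

With Lam(t) = t this reduces to the ordinary geometric Brownian motion path, matching the abstract's remark that intensity-1 Poisson recovers the classical model.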
 Date Issued
 2019
 Identifier
 2019_Spring_Salehy_fsu_0071E_15152
 Format
 Thesis
 Title
 Essays on Sovereign Debt and Partial Default.
 Creator

Feng, Shuang, Atolia, Manoj, Kercheval, Alec N., Dmitriev, Mikhail I., Marquis, Milton H., Florida State University, College of Social Sciences and Public Policy, Department of Economics
 Abstract/Description

My dissertation studies sovereign debt flows, with an emphasis on the determinants and characteristics of sovereign default in emerging markets, accounting for the partial nature of this default. It consists of three chapters: Chapter One and Chapter Two empirically explore the determinants of sovereign default, while Chapter Three quantitatively investigates the macroeconomic implications of the partial nature of sovereign default. In Chapter One and Chapter Two, I empirically examine the monetary and default responses of sovereign countries to fluctuations in world commodity prices in a panel of 21 emerging countries with annual observations from 1970 to 2013, constructing and using a country-specific commodity price index with time-varying weights. The selection of the sample countries is based on both the definition of an emerging market by the International Monetary Fund (IMF) and the available information on external public and publicly guaranteed (PPG) debt arrears in the World Development Indicators (WDI) database. It is commonly known that emerging markets are vulnerable to global shocks, such as shocks to world commodity prices. Large fluctuations in world commodity prices over the business cycle can greatly affect their foreign revenues, causing excessive external imbalances in international payments. Countries with external imbalances, especially with excessive current account deficits, are most likely to experience limited spending due to constraints on the intertemporal substitution of expenditure and are likely to default on their external debt denominated in foreign currencies. I choose world commodity prices as the main predictor of the monetary and default responses because, for many of the countries in the sample, commodities are a large proportion of export and foreign revenues. 
Large fluctuations in commodity prices, by causing external imbalances, greatly affect their ability to service external debt, which is typically denominated in foreign currencies. Chapter One, "World commodity prices, money, and foreign exchange in emerging markets: New evidence", investigates countries' monetary responses (the change in broad money and the choice of exchange rate regime) when they encounter foreign revenue reductions caused by fluctuations in world commodity prices, before making the decision to default. The estimates show that declines in world commodity prices significantly and positively affect the ratio of broad money to GDP and that countries tend to have more flexible exchange rate regimes when world commodity prices are depressed for an extended period. The investigation of the response of the broad money supply is consistent with the open economy trilemma, and the estimates of exchange rate regime flexibility fill a gap in the literature on the determinants of the choice of exchange rate regimes. By capturing the type of global shock as well as time-varying country characteristics, the effect of the price index (excluding the country fixed effect) explains well the time-series and country-specific variation in exchange rate regimes. Chapter Two, "World commodity prices and partial default in emerging markets: An empirical analysis", mainly explores the effects of fluctuations in world commodity prices on sovereign default. The results show that a decrease in the price index increases the default rate. The response of the default rate varies across countries, and it generally increases with a country's dependence on commodity exports and its external indebtedness. This chapter provides the first economically significant, quantitative estimates of the effect of world commodity prices on the default rate. 
A few unique features of my approach allow me to make this contribution. The first feature is the price index. In this chapter, the details of developing the novel country-specific commodity price index are given. The price index is constructed with a two-stage aggregation using time-varying weights based on commodities exported and is used as the main explanatory variable. By accounting for changes in export structure, the time-varying weights allow me to use data for a longer period, from 1970 to 2013, covering the emerging markets debt crisis, the currency crises, and the recent contraction. The country-specific nature of the price index helps control for other common global shocks in the estimation. The second feature is that I focus on realized default risk and use the partial default rate, rather than default events, country spreads, or credit ratings, as the proxy for that risk. Along with the price index, the default rate provides longer-period data and allows me to begin the analysis in 1970. In Chapter Three, "Sovereign debt: A quantitative comparative investigation of the partial default mechanism", I build and quantitatively solve partial default models of a small open economy, in both endowment and production environments, to investigate the responses of the borrowing, default, and pricing of sovereign debt to economic shocks and to examine how the partial default mechanism improves the predictions of sovereign default models. The simulation results of the models predict well the country spreads, default-related statistics, and other business cycle indicators. My models assume non-exclusion from the international capital market after default. Thus, I can also examine the impulse responses of various macroeconomic variables to the shocks, to better understand the underlying propagation mechanism of partial default. 
The unrealistic assumptions and limited predictive performance of full default models are the two main reasons I choose the partial default mechanism and build partial default models. First, the standard theory of sovereign default assumes that countries default on all of their debt and are excluded from the international capital market after default. The empirical regularities, however, show that countries typically default on only part of their debt and continue to borrow while holding debt in arrears. Beyond these inaccurate assumptions, the full default model is limited in predicting some critical debt indicators, such as the debt-to-output ratio and the default frequency, although it does predict that default happens in bad times and that country spreads are countercyclical. The partial default models improve the predictions of the debt level and the default frequency without losing the ability to match other data moments. Moreover, they can predict the partial default rate, which the full default model is not designed to predict and cannot predict. The partial default models in Chapter Three have three features: first, partial default is endogenously determined, which allows me to compute the default rate; second, there is a preemptive recovery payment upon default, which gives the price function of short-term debt the feature that the price of long-term debt has; and third, there is no exclusion from the international capital market after default, so I can examine the impulse responses of various macroeconomic variables. Compared with the endowment version, the partial default model with production generates better predictions for debt service and corrects the overpredicted volatilities of consumption and interest spreads. 
Besides simultaneously matching the mean spread and the debt-to-output ratio, its simulated results predict procyclical investment and closely match the relative volatility of investment.
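The two-stage, time-varying-weight index construction described above can be sketched in a few lines. This is a minimal illustration under my own assumptions about the data layout (annual world prices and export values per commodity); the function name and normalization are illustrative, not the author's implementation:

```python
import numpy as np

def country_price_index(prices, export_values):
    """Country-specific commodity price index with time-varying weights.

    prices:        (T, K) array of world prices for K commodities.
    export_values: (T, K) array of the country's export values per commodity;
                   each period's weights are the commodity shares of exports.
    Returns a (T,) index normalized to 100 in the first period.
    """
    # Stage 1: time-varying weights = each commodity's share of total exports.
    weights = export_values / export_values.sum(axis=1, keepdims=True)
    # Stage 2: aggregate world prices with those weights.
    raw = (weights * prices).sum(axis=1)
    return 100.0 * raw / raw[0]
```

Because the weights are recomputed each period, a shift in a country's export structure changes which world prices drive its index, which is what allows the long 1970-2013 sample described above.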
 Date Issued
 2019
 Identifier
 2019_Summer_Feng_fsu_0071E_15304
 Format
 Thesis
 Title
 A Stock Market Agent-Based Model Using Evolutionary Game Theory and Quantum Mechanical Formalism.
 Creator

Montin, Benoit S., Nolder, Craig A., Huffer, Fred W., Case, Bettye Anne, Beaumont, Paul M., Kercheval, Alec N., Sumners, DeWitt L., Department of Mathematics, Florida State University
 Abstract/Description

The financial market is modelled as a complex self-organizing system. Three economic agents interact in a simplified economy and seek to maximize their wealth. Replicator dynamics serve as a myopic behavioral rule describing how agents learn and benefit from their experiences. Stock price fluctuations result from interactions between economic agents, budget constraints, and conservation laws. Time is discrete. Invariant distributions over the state space, that is, probability measures that remain unchanged by the one-period transition rule, form stochastic equilibria for our composite system. When agents make mistakes, there is a unique stochastic steady state which reflects the average and limit behavior. Convergence of the iterates occurs at a geometric rate in the total variation norm. Interestingly, as the probability of making a mistake tends to zero, the invariant distribution converges weakly to a stochastic equilibrium for the model without mistakes. Most agent-based computational economies rely heavily on simulations. Having adopted a simple representation of financial markets, we have been able to prove the above theoretical results and gain intuition about complexity economics. We analyze the impact of simple monetary policies, such as a decrease in the risk-free rate of interest, on the limit stock price distribution. Also of interest, the limit stock log-return distribution exhibits real-world features (skewness and leptokurtosis) that more traditional models usually fail to explain or consider. Our artificial market is incomplete. The bid and ask prices of a vanilla call option are computed to illustrate option pricing in our setting.
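The replicator dynamics mentioned above have a standard discrete-time form: strategies with above-average payoff gain population share, those below average lose it. This is the textbook update, shown for intuition; it is not necessarily the exact rule used in the dissertation:

```python
import numpy as np

def replicator_step(x, payoff):
    """One discrete-time replicator update.

    x:      (n,) strategy frequencies, summing to 1.
    payoff: (n, n) matrix; payoff[i, j] is strategy i's payoff against j.
            Payoffs are assumed positive so the ratio form is well defined.
    """
    fitness = payoff @ x        # expected payoff of each strategy
    avg = x @ fitness           # population-average payoff
    return x * fitness / avg    # shares grow in proportion to relative fitness
```

For example, if one strategy earns payoff 2 against everything and the other earns 1, an even split (0.5, 0.5) moves to (2/3, 1/3) after one step.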
 Date Issued
 2004
 Identifier
 FSU_migr_etd2331
 Format
 Thesis
 Title
 Asset Pricing in a Lucas Framework with Boundedly Rational, Heterogeneous Agents.
 Creator

Culham, Andrew J. (Andrew James), Beaumont, Paul M., Kercheval, Alec N., Schlagenhauf, Don, Goncharov, Yevgeny, Kopriva, David, Department of Mathematics, Florida State University
 Abstract/Description

The standard dynamic general equilibrium model of financial markets does a poor job of explaining the empirical facts observed in real market data. The common assumptions of homogeneous investors and rational expectations equilibrium are thought to be major factors behind this poor performance. In an attempt to relax these assumptions, the literature has seen the emergence of agent-based computational models in which artificial economies are populated with agents who trade in stylized asset markets. Although they offer a great deal of flexibility, these agent-based models have often been criticized by the theoretical community because the agents are too limited in their analytical abilities. In this work, we create an artificial market with a single risky asset and populate it with fully optimizing, forward-looking, infinitely lived, heterogeneous agents. We restrict the agents' state space by not allowing them to observe the aggregate distribution of wealth, so they must compute their conditional demand functions while simultaneously learning the equations of motion for the aggregate state variables. We develop efficient and flexible model code that can be used to explore a wide range of asset pricing questions while remaining consistent with conventional asset pricing theory. We validate our model and code against known analytical solutions as well as against a new analytical result for agents with differing discount rates. Our simulation results for general cases without known analytical solutions show that, in general, agents' asset holdings converge to a steady-state distribution and the agents are able to learn the equilibrium prices despite the restricted state space. Further work will be necessary to determine whether the exceptional cases have some fundamental theoretical explanation or can be attributed to numerical issues. 
We conjecture that convergence to the equilibrium is global and that the market-clearing price acts to guide the agents' forecasts toward that equilibrium.
 Date Issued
 2007
 Identifier
 FSU_migr_etd2948
 Format
 Thesis
 Title
 Alternative Models for Stochastic Volatility Corrections for Equity and Interest Rate Derivatives.
 Creator

Liang, Tianyu, Kercheval, Alec N., Wang, Xiaoming, Liu, Ewald, Brian, Nichols, Warren D., Department of Mathematics, Florida State University
 Abstract/Description

Much attention has been paid to stochastic volatility models in which the volatility fluctuates randomly, driven by an additional Brownian motion. In our work, we change the mean level in the mean-reverting process from a constant to a function of the underlying process. We apply our models to the pricing of both equity and interest rate derivatives. Throughout the thesis, a singular perturbation method is employed to derive closed-form formulas for first-order asymptotic solutions. We also apply multiplicative noise to the arithmetic Ornstein-Uhlenbeck process to produce a wider variety of effects. Calibration and Monte Carlo simulation results show that the proposed models outperform Fouque's original stochastic volatility model during certain historical windows. A more efficient numerical scheme, the heterogeneous multiscale method (HMM), is introduced to simulate the multiscale differential equations discussed in these chapters.
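A mean-reverting process whose mean level is a function of the current state, with multiplicative noise, can be simulated with a plain Euler-Maruyama scheme. The particular functional form below (mean level depending on the process's own state, noise proportional to the level) is my own illustrative choice, not the dissertation's exact specification:

```python
import numpy as np

def simulate_ou(y0, kappa, mean_level, beta, T, n, rng):
    """Euler-Maruyama path of
        dY_t = kappa * (mean_level(Y_t) - Y_t) dt + beta * Y_t dW_t,
    i.e. an Ornstein-Uhlenbeck-style process where the usual constant mean
    is replaced by a state-dependent function and the noise is multiplicative.
    """
    dt = T / n
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        y[i + 1] = y[i] + kappa * (mean_level(y[i]) - y[i]) * dt + beta * y[i] * dw
    return y
```

With beta = 0 the scheme reduces to a deterministic relaxation toward the mean level, which is a convenient sanity check on the discretization.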
 Date Issued
 2012
 Identifier
 FSU_migr_etd4990
 Format
 Thesis
 Title
 Calibration of Local Volatility Models and Proper Orthogonal Decomposition Reduced Order Modeling for Stochastic Volatility Models.
 Creator

Geng, Jian, Navon, Ionel Michael, Case, Bettye Anne, Contreras, Rob, Okten, Giray, Kercheval, Alec N., Ewald, Brian, Department of Mathematics, Florida State University
 Abstract/Description

There are two themes in this thesis: local volatility models and their calibration, and Proper Orthogonal Decomposition (POD) reduced-order modeling with application to stochastic volatility models, which has potential for the calibration of stochastic volatility models. In the first part of the thesis (chapters II-III), local volatility models are introduced and then calibrated to European options across all strikes and maturities of the same underlying. There is no interpolation or extrapolation of either the option prices or the volatility surface. We make no assumption about the shape of the volatility surface except that it is smooth. Owing to the smoothness assumption, we apply a second-order Tikhonov regularization, choosing the regularization parameter as one of the singular values of the Jacobian matrix of the Dupire model. Finally, we perform extensive numerical tests to assess and verify these techniques, both for local volatility models with known analytical solutions for European option prices and for real market option data. In the second part of the thesis (chapters IV-V), stochastic volatility models and POD reduced-order modeling are introduced in turn, and POD reduced-order modeling is then applied to the Heston stochastic volatility model for the pricing of European options. Finally, chapter VI summarizes the thesis and points out future research areas.
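Second-order Tikhonov regularization of the kind mentioned above penalizes the curvature of the solution via a discrete second-difference operator. The sketch below solves the regularized normal equations for a generic linearized problem; the choice of the parameter alpha (from the Jacobian's singular values, per the abstract) is a separate step not shown here:

```python
import numpy as np

def tikhonov_solve(J, r, alpha):
    """Solve  min_x ||J x - r||^2 + alpha * ||L x||^2,
    where L is the discrete second-difference operator, so the penalty
    favors smooth (low-curvature) solutions. Solves the normal equations
        (J'J + alpha L'L) x = J' r.
    """
    m = J.shape[1]
    L = np.zeros((m - 2, m))
    for i in range(m - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]   # second difference stencil
    A = J.T @ J + alpha * (L.T @ L)
    return np.linalg.solve(A, J.T @ r)
```

With alpha = 0 the solver reduces to ordinary least squares; as alpha grows, the recovered vector is pulled toward a straight line (zero second differences), which is the numerical expression of the smoothness prior on the volatility surface.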
 Date Issued
 2013
 Identifier
 FSU_migr_etd7388
 Format
 Thesis
 Title
 Γ-Ray Spectroscopic Study of Calcium-48,49 and Scandium-50 Focusing on Low-Lying Octupole Vibration Excitations.
 Creator

McPherson, David M. (David Marc), Cottle, Paul D. (Paul Davis), Kercheval, Alec N., Cao, Jianming, Piekarewicz, Jorge, Riley, Mark A., Florida State University, College of Arts and Sciences, Department of Physics
 Abstract/Description

An inverse kinematic proton scattering experiment was performed at the National Superconducting Cyclotron Laboratory (NSCL) using the GRETINA-S800 detector system in conjunction with the Ursinus College liquid hydrogen target. $\gamma$-ray yields from the experiment were determined using geant4 simulations, yielding state population cross sections. These cross sections were used to extract the $\delta_3$ deformation lengths for the low-lying octupole vibration excitations in Ca-48,49 using the coupled-channels analysis code fresco. Particle-core coupling in Ca-49 was studied in comparison to Ca-48 through determination of the neutron and proton deformation lengths. The total inverse kinematic proton scattering deformation lengths for the low-lying octupole vibration excitations in Ca-48,49 were evaluated to be $\delta_3$(Ca-48, $3^-_1$) = 1.0(2) fm, $\delta_3$(Ca-49, $9/2^+_1$) = 1.2(1) fm, $\delta_3$(Ca-49, $9/2^+_1$) = 1.5(2) fm, and $\delta_3$(Ca-49, $5/2^+_1$) = 1.1(1) fm. Proton and neutron deformation lengths for two of these octupole states were also determined to be $\delta_p$(Ca-48, $3^-_1$) = 0.9(1) fm, $\delta_p$(Ca-49, $9/2^+_1$) = 1.0(1) fm, $\delta_n$(Ca-48, $3^-_1$) = 1.1(3) fm, and $\delta_n$(Ca-49, $9/2^+_1$) = 1.3(3) fm. Additionally, the ratios of the neutron to proton transition matrix elements for these two states were determined to be $M_n/M_p$(Ca-48, $3^-_1$) = 1.7(6) and $M_n/M_p$(Ca-49, $9/2^+_1$) = 2.0(5). Statistically, the derived values for these two nuclei are nearly identical.
 Date Issued
 2015
 Identifier
 FSU_migr_etd9650
 Format
 Thesis
 Title
 Asset Pricing Equilibria for Heterogeneous, Limited-Information Agents.
 Creator

Jones, Dawna Candice, Kercheval, Alec N., Beaumont, Paul M., Van Winkle, David H., Nichols, Warren, Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

The standard general equilibrium asset pricing models typically make two simplifying assumptions: homogeneous agents and the existence of a rational expectations equilibrium. This setting sometimes yields outcomes inconsistent with the empirical findings. We hypothesize that allowing agent heterogeneity could help replicate the empirical results. However, including heterogeneity in models where agents are fully rational proves impossible to solve without severe simplifying assumptions. The difficulty is that heterogeneous agent models generate an endogenously complicated distribution of wealth across agents: the state space for each agent's optimization problem includes the complex dynamics of the wealth distribution, and there is no general way to characterize the interaction between the distribution of wealth and the macroeconomic aggregates. To address this issue, we implement an agent-based model in which the agents have bounded rationality. Our model is a complete-markets economy with two agents and two assets. The agents are heterogeneous, utility-maximizing, and have constant relative risk aversion [CRRA] preferences. How the agents address the stochastic evolution of the wealth distribution is central to our task, since aggregate prices depend on this behaviour. An important component of this dissertation involves the computational difficulty of dynamic heterogeneous-agent models: in order to predict prices, agents need a way to keep track of the evolution of the wealth distribution. We accomplish this by allowing each agent to assume that a price-equivalent representative agent exists and that this representative agent has a constant coefficient of relative risk aversion. In so doing, the agents can formulate predictive pricing and demand functions, which allow them to predict aggregate prices and make consumption and investment decisions each period. 
However, the agents' predictions are only approximately correct, so we introduce a learning mechanism to maintain the required level of accuracy in the agents' price predictions. From this setup, we find that the model, with learning, converges over time to an approximate expectations equilibrium, provided the initial conditions are close enough to the rational expectations equilibrium prices. Two main contributions of this work are: 1) formulating a new concept of approximate equilibria, and 2) showing how equilibria can be approximated numerically, despite the fact that the true state space at any point in time is mathematically complex. These contributions offer the possibility of characterizing a new class of asset pricing models in which agents are heterogeneous and only slightly limited in their rationality. That is, the partially informed agents in our model are able to forecast and utility-maximize only about as well as economists who face problems of estimating aggregate variables. Using an exogenously assigned adaptive learning rule, we analyse this implementation in a Lucas-type heterogeneous agent model. We focus on the sensitivity of the risk parameter and the convergence of the model to an approximate expectations equilibrium. We also study the extent to which adaptive learning can explain the empirical findings in an asset pricing model with heterogeneous agents.
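The representative-agent shortcut the agents use can be illustrated with the simplest Lucas-tree pricing formula under CRRA utility. The i.i.d.-dividend-growth special case below is a textbook benchmark for intuition only, not the dissertation's two-agent model:

```python
import numpy as np

def lucas_price_dividend_ratio(beta, gamma, growth_states, probs):
    """Price-dividend ratio in a Lucas tree with CRRA utility and i.i.d.
    dividend growth. From the Euler equation
        p = E[ beta * g^(-gamma) * (p' + d') ],
    a constant ratio v = p/d satisfies v = m * (1 + v) with
        m = beta * E[ g^(1 - gamma) ],
    hence v = m / (1 - m).
    """
    m = beta * np.dot(probs, growth_states ** (1.0 - gamma))
    assert m < 1.0, "finite price requires beta * E[g^(1-gamma)] < 1"
    return m / (1.0 - m)
```

With log utility (gamma = 1) the ratio collapses to beta / (1 - beta), independent of the growth distribution, which makes a convenient check on the formula.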
 Date Issued
 2015
 Identifier
 FSU_migr_etd9624
 Format
 Thesis
 Title
 Estimating Sensitivities of Exotic Options Using Monte Carlo Methods.
 Creator

Yuan, Wei, Ökten, Giray, Kim, Kyounghee, Huffer, Fred W. (Fred William), Kercheval, Alec N., Nichols, Warren, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

In this dissertation, methods of estimating the sensitivities of complex exotic options, including options written on multiple assets and options with discontinuous payoffs, are investigated. The calculation of the sensitivities (Greeks) is based on the finite difference method, the pathwise method, the likelihood ratio method, and the kernel method, via Monte Carlo or quasi-Monte Carlo simulation. Direct Monte Carlo estimators for various sensitivities of weather derivatives and mountain range options are given. The numerical results show that the pathwise method outperforms the other methods when the payoff function is Lipschitz continuous; the kernel method and the central finite difference method are competitive when the payoff function is discontinuous.
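For a single-asset European call under Black-Scholes dynamics, the pathwise estimator referred to above takes a particularly simple form: differentiating the discounted payoff path by path with respect to the initial price. This is a minimal single-asset sketch, not the dissertation's multi-asset setting:

```python
import numpy as np

def pathwise_delta_call(s0, k, r, sigma, T, n_paths, seed=0):
    """Pathwise Monte Carlo estimator of a European call's delta.

    Under geometric Brownian motion, dS_T/dS_0 = S_T / S_0, so the
    pathwise derivative of the discounted payoff is
        e^{-rT} * 1{S_T > K} * S_T / S_0,
    which is valid because the call payoff is Lipschitz in S_T.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.mean((st > k) * st / s0)
```

For a discontinuous payoff (e.g. a binary option) this interchange of derivative and expectation fails, which is exactly why the abstract turns to the kernel and finite difference methods in that case.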
 Date Issued
 2015
 Identifier
 FSU_migr_etd9528
 Format
 Thesis
 Title
 Essays in Fiscal Policy: Computational Accuracy and Optimal Investments in Public, Private, and Human Capital.
 Creator

Awad, Bassam R. (Bassam Rasheed), Marquis, Milton H., Kercheval, Alec N., Schlagenhauf, Don, Atolia, Manoj, Department of Economics, Florida State University
 Abstract/Description

This dissertation is entitled "Essays in Fiscal Policy: Computational Accuracy and Optimal Investments in Public, Private, and Human Capital." It addresses three questions. First, how good are linearization and higher-order approximations in an endogenous growth model with public capital? Second, how important are transitional dynamics when assessing alternative fiscal policies in terms of long-run growth versus welfare? Third, is a consumption tax optimal in an economy with public and private human capital when the tax structure is time-invariant? To answer these questions, I use an endogenous growth model in which growth is driven by the accumulation of human capital and fueled by a public capital externality. I demonstrate that the policies that induce the highest rates of economic growth do not always provide the highest welfare. In addition, the traditional methods of analyzing the economic consequences of alternative tax policy regimes can produce very large approximation errors, which do not occur with the method employed in my analysis. Moreover, the short-run implications of tax reforms can be very different from the long-run consequences, and it may take many years for the benefits of tax reform to offset short-term losses. Chapter 2 investigates the errors of linearization and higher-order approximations in comparison with the actual nonlinear solution of the dynamic general equilibrium model. The standard procedure for analyzing transitional dynamics in nonlinear macro models has been to employ linear approximations; recently, quadratic approximations have been explored. This chapter examines the accuracy of these and higher-order approximations in an endogenous growth model with public capital, thereby extending work in the current literature on the neoclassical growth model. 
I find that significant errors may persist in computed transition paths and welfare even with approximations as high as fourth order. Moreover, the accuracy of approximations need not increase monotonically with the order of approximation. Also, as in the previous literature, I find that achieving acceptable accuracy when computing the welfare consequences of a policy change typically requires a higher-order approximation than attaining similar accuracy in the computation of the transition path; typically an increase in the order of approximation by one is sufficient. Chapter 3 analyzes the effects of distortionary taxes on growth and welfare in an endogenous growth model with a public capital externality. The model is calibrated to the U.S. economy, and experiments are run in which the tax regime is shifted from the current mix of capital income, labor income, and consumption taxes to a fiscal regime with complete reliance on a single source of taxation, including a lump-sum tax. I find that tax policy changes that induce a higher growth rate do not necessarily result in higher welfare, owing to different transitory effects. In fact, a shift to exclusive reliance on a capital income tax, while delivering the highest long-run growth, results in the lowest welfare. Furthermore, long-run gains take many years (a generation) to be realized. Among the different sources of taxation, I find that, in the long run, complete reliance on a consumption tax dominates the current tax regime; the current tax regime in turn dominates an exclusive labor income tax, which is less welfare-reducing than an exclusive capital income tax. These results follow from the fact that taxes on labor income and capital income distort investment decisions in reproducible capital, i.e., human capital and physical capital, and therefore have cumulative effects that a tax on consumption does not produce. 
Unlike previous studies, I account for the welfare effects of transition using optimal nonlinear decision rules all along the transition path. Chapter 4 builds on Chapter 3 by further analyzing the superiority of a consumption tax. There is a long-standing debate in the literature on the choice between consumption or expenditure taxes and capital income taxes, going back to Thomas Hobbes (1651), Mill (1871), and later Kaldor (1955), who advocated the consumption tax over the income tax. The case for a consumption tax has empirical support: some studies indicate that the tax revenue collected in the United States includes a relatively small contribution from capital taxes (Roger Gordon, Laura Kalambokidis, Jeffrey Rohaly and Joel Slemrod (2004)). This chapter examines tax policy in an endogenous growth model with a public capital externality, where human capital serves as the engine of growth. In Chapter 3, this model was calibrated to the U.S. economy and experiments were run to calculate the welfare gains from a shift in the fiscal regime from the current mix of capital income, labor income, and consumption taxes to complete reliance on a consumption tax. In those experiments, government expenditure on public capital as a share of output was held fixed, and the consumption-only tax regime proved superior to the current tax regime and to other regimes relying solely on a single source of taxation. In this chapter, government tax revenue as a share of output is varied in order to find the optimal level of investment in public capital under a consumption-only tax regime. I find that, in the presence of a significant externality, a modest increase in the consumption tax with greater investment in public capital can increase welfare. I also show that a slight shift in taxes from consumption to capital income can be welfare-improving if the externality is high enough.
 Date Issued
 2010
 Identifier
 FSU_migr_etd0264
 Format
 Thesis
 Title
 Monte Carlo and Quasi-Monte Carlo Methods in Financial Derivative Pricing.
 Creator

Göncü, Ahmet, Okten, Giray, Huffer, Fred, Ewald, Brian, Kercheval, Alec N., Mascagni, Michael, Department of Mathematics, Florida State University
 Abstract/Description

In this dissertation, we discuss the generation of low-discrepancy sequences, the randomization of these sequences, and transformation methods for generating normally distributed random variables. Two well-known methods for generating normally distributed numbers are considered, namely the Box-Muller and inverse transformation methods. Some researchers and financial engineers have claimed that it is incorrect to use the Box-Muller method with low-discrepancy sequences and that the inverse transformation method should be used instead. We investigate the sensitivity of various computational finance problems to the choice of normal transformation method. The Box-Muller method is theoretically justified in the quasi-Monte Carlo context by showing that the same error bounds apply to Box-Muller transformed point sets. Furthermore, new error bounds are derived for financial derivative pricing problems and for an isotropic integration problem in which the integrand is a function of the Euclidean norm. Theoretical results are derived for financial derivative pricing problems, such as European call, Asian geometric, and binary options, with a convergence rate of 1/N. A stratified Box-Muller algorithm is introduced as an alternative to the Box-Muller and inverse transformation methods, and new numerical evidence is presented in favor of this method. Finally, a statistical test for pseudorandom numbers is adapted to measure the uniformity of transformed low-discrepancy sequences.
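The Box-Muller transform at the center of the debate above is a two-line map from a pair of uniforms to a pair of independent standard normals; with a low-discrepancy input, consecutive coordinates of a 2-d sequence play the roles of (u1, u2). A minimal sketch (the clamp guarding against log(0) is my own defensive detail):

```python
import numpy as np

def box_muller(u1, u2):
    """Box-Muller transform: maps uniforms u1, u2 in (0, 1) to a pair of
    independent standard normal variates via polar coordinates
        r = sqrt(-2 ln u1),  theta = 2 pi u2.
    """
    r = np.sqrt(-2.0 * np.log(np.maximum(u1, 1e-300)))  # guard u1 == 0
    theta = 2.0 * np.pi * u2
    return r * np.cos(theta), r * np.sin(theta)
```

The inverse transformation method instead applies the normal quantile function coordinate-wise; the dissertation's point is that, contrary to folklore, Box-Muller also enjoys quasi-Monte Carlo error bounds.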
 Date Issued
 2009
 Identifier
 FSU_migr_etd4144
 Format
 Thesis
 Title
 Asset Market Dynamics of Heterogeneous Agent Models with Learning.
 Creator

Guan, Yuanying, Beaumont, Paul M., Kercheval, Alec N., Marquis, Milton, Mesterton-Gibbons, Mike, Nichols, Warren D., Department of Mathematics, Florida State University
 Abstract/Description

The standard Lucas asset pricing model makes two common assumptions: homogeneous agents and a rational expectations equilibrium. These assumptions, however, are unrealistic for real financial markets. In this work, we relax them and establish a Lucas-type agent-based asset pricing model. We create an artificial economy with a single risky asset and populate it with heterogeneous, boundedly rational, utility-maximizing, infinitely lived, forward-looking agents. We restrict agents' information by allowing them to use only available information when they make optimal choices. With independent, identically distributed market returns, agents are able to compute their policy functions and the equilibrium pricing function with Duffie's method (Duffie, 1988) without perfect information about the market. When agents are out of equilibrium, they simultaneously compute their policy functions with predictive pricing functions and use adaptive learning schemes to learn the motion of the correct pricing function. Agents are able to learn the correct equilibrium pricing function for certain risk and learning parameters; in other cases, the market price exhibits excess volatility and trading volume is very high. Simulations of the market behavior show rich dynamics, including a whole cascade from period-doubling bifurcations to chaos. We apply the full families theory (De Melo and Van Strien, 1993) to prove that the rich dynamics do not come from numerical errors but are embedded in the structure of our dynamical system.
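One widely used adaptive learning scheme of the kind referenced above is recursive least squares over the coefficients of a perceived pricing function. The sketch below shows the generic mechanics under my own assumptions (linear perceived law, user-supplied gain sequence); it is not the dissertation's specific scheme:

```python
import numpy as np

def rls_update(b, R, x, y, gain):
    """One recursive-least-squares step of adaptive learning.

    Each period the agent re-estimates the coefficients b of a perceived
    pricing function y ~ b @ x from the latest observation (x, y).

    b:    (k,) current coefficient beliefs.
    R:    (k, k) running estimate of the regressor moment matrix E[x x'].
    gain: step size; a 1/t sequence gives ordinary RLS, a constant gives
          constant-gain ("perpetual") learning.
    """
    R = R + gain * (np.outer(x, x) - R)                 # update moments
    b = b + gain * np.linalg.solve(R, x) * (y - b @ x)  # move toward the data
    return b, R
```

With a decreasing 1/t gain and noiseless linear data, the beliefs lock onto the true coefficients; a constant gain instead keeps discounting old data, which is what allows the perpetual-learning dynamics (and, in some parameter regions, the excess volatility) described above.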
 Date Issued
 2011
 Identifier
 FSU_migr_etd3938
 Format
 Thesis
 Title
 Radically Elementary Stochastic Summation with Applications to Finance.
 Creator

Zhu, Ming, Nichols, Warren D., Kim, Kyounghee, Huffer, Fred W., Ewald, Brian, Kercheval, Alec N., Ökten, Giray, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation develops a nonstandard approach to probability, stochastic calculus and financial modeling, within the framework of the Radically Elementary Probability Theory of Edward Nelson. The fundamental objects of investigation are stochastic sums with respect to a martingale, defined on a finite probability space and indexed by a finite set. We study the external (nonstandard) properties of these sums, such as almost sure continuity of trajectories, the Lp property, and the Lindeberg condition; we also study external properties of related processes, such as quadratic variation and proper time. Using the tools so developed, we obtain an Itô-Doeblin formula for change of variable and a Girsanov theorem for change of measure in a quite general setting. We also obtain results that aid us in comparing certain of the processes we investigate to conventional ones. We illustrate the theory by using general techniques to build stock models driven by Wiener walks, Poisson walks and their combinations, and show in each case that when our parameter processes are constant we recover the prices for European calls of the corresponding models that use conventional stochastic calculus. Finally, we exhibit a model driven by a nonstandard Wiener process that produces different prices for European calls than are given by the conventional Black-Scholes model.
 Date Issued
 2014
 Identifier
 FSU_migr_etd9125
 Format
 Thesis
 Title
 Asymptotic Behaviour of Convection in Porous Media.
 Creator

Parshad, Rana Durga, Wang, Xiaoming, Ye, Ming, Case, Bettye Anne, Ewald, Brian, Kercheval, Alec N., Nolder, Craig, Department of Mathematics, Florida State University
 Abstract/Description

This dissertation investigates the asymptotic behaviour of convection in a fluid-saturated porous medium. We analyse the Darcy-Boussinesq system under perturbation of the Darcy-Prandtl number parameter. In very tightly packed media this parameter is of very large order and can be driven to infinity to yield the infinite Darcy-Prandtl number model. We show convergence of the global attractors and invariant measures of the Darcy-Boussinesq system to those of the infinite Darcy-Prandtl number model with respect to perturbation of the Darcy-Prandtl number parameter.
 Date Issued
 2009
 Identifier
 FSU_migr_etd2182
 Format
 Thesis
 Title
 Exponential Convergence Fourier Method and Its Application to Option Pricing with Lévy Processes.
 Creator

Gu, Fangxi, Nolder, Craig, Huffer, Fred W. (Fred William), Kercheval, Alec N., Nichols, Warren D., Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

Option pricing by the Fourier method has been popular for the past decade, and many of its applications to Lévy processes have targeted European options. This thesis focuses on the exponential convergence Fourier method and its application to discretely monitored options and Bermudan options. An alternative payoff-truncating method is derived and compared with the benchmark Hilbert transform. A general error control framework is derived to keep the Fourier method out of overflow problems. Numerical results verify that the alternative payoff-truncating sinc method performs better than the benchmark Hilbert transform method under the error control framework.
 Date Issued
 2016
 Identifier
 FSU_FA2016_Gu_fsu_0071E_13579
 Format
 Thesis
 Title
 Statistical Analysis on Object Spaces with Applications.
 Creator

Yao, Kouadio David, Patrangenaru, Victor, Kercheval, Alec N., Liu, Xiuwen, Mio, Washington, Wang, Xiaoming, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

Most of the data encountered is bounded nonlinear data. The Universe is bounded, planets are sphere-like objects, and life growing on Earth comes in various shapes and colors that can hardly be represented as points in a linear space. Even if the object space they sit on is embedded in a Euclidean space, their mean vector cannot be represented as a point on that object space, except when such a space is convex. To address this misgiving, since the mean vector is the minimizer of the expected square distance, following Fréchet (1948), on a compact metric space one may consider both minimizers and maximizers of the expected square distance to a given point on the object space as the mean, respectively the antimean, of a given random point. Of all distances on an object space, one considers here the chord distance associated with an embedding of the object space, since for such distances one can give a necessary and sufficient condition for the existence of a unique Fréchet mean (respectively Fréchet antimean). For such distributions these location parameters are called the extrinsic mean (respectively the extrinsic antimean), and the corresponding sample statistics are consistent estimators of their population counterparts. Moreover, one derives the limit distribution of such estimators around a mean located at a smooth extrinsic antimean. Extrinsic analysis is thus a general framework that allows one to run object data analysis on nonlinear object spaces that can be embedded in a numerical space. In particular, one focuses on Veronese-Whitney (VW) means and antimeans of 3D projective shapes of configurations extracted from digital camera images. The 3D data extraction is greatly simplified by an RGB-based algorithm followed by the Faugeras-Hartley-Gupta-Chen 3D reconstruction method. In particular, one derives two-sample tests for face analysis based on projective shapes and, more generally, a MANOVA on manifolds method to be used in 3D projective shape analysis. The manifold-based approach is also applicable to financial data analysis for exchange rates.
 Date Issued
 2016
 Identifier
 FSU_FA2016_Yao_fsu_0071E_13605
 Format
 Thesis
 Title
 Modeling Credit Risk in the Default Threshold Framework.
 Creator

Chiu, Chun-Yuan, Kercheval, Alec N., Chicken, Eric, Ökten, Giray, Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

The default threshold framework for credit risk modeling developed by Garreau and Kercheval [SIAM Journal on Financial Mathematics, 7:642–673, 2016] enjoys the advantages of both structural form models and reduced form models, including excellent analytical tractability. In their paper, the closed-form default time distribution of a company is derived when the default threshold is a constant or a deterministic function. For a stochastic default threshold, it is shown that the survival probability can be derived as an expectation. How to specify the stochastic default threshold so that this expectation can be obtained in closed form is, however, left unanswered. The purpose of this thesis is to fill this gap. In this thesis, three credit risk models with stochastic default thresholds are proposed, under each of which the closed-form default time distribution is derived. Unlike Garreau and Kercheval's work, where the log-return of a company's stock price is assumed to be independent and identically distributed and the interest rate is assumed constant, in our new proposed models the random interest rate and the stochastic volatility of a company's stock price are taken into consideration. While in some cases the defaultable bond price, the credit spread and the CDS premium are derived in closed form under the new proposed models, in others it seems not so easy. The difficulty that stops us from getting closed-form formulas is also discussed in this thesis. Our new models involve the Heston model, which has a closed-form characteristic function. We found that the common characteristic function formula used in the literature is not always applicable for all input variables. In this thesis the safe region of the formula is analyzed completely. A new formula is also derived that can be used to find the characteristic function value in some cases when the common formula is not applicable. An example is given where the common formula fails and one should use the new formula.
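As background for the Heston characteristic function discussed in this abstract, the sketch below implements the "trap-avoiding" formulation of Albrecher et al. (2007). This is an illustrative sketch only, not necessarily the formula analyzed in the thesis; the function name and parameter values are invented:

```python
import numpy as np

def heston_cf(u, t, s0, r, v0, kappa, theta, sigma, rho):
    """Characteristic function of log(S_t) under the Heston model,
    in the 'trap-avoiding' formulation of Albrecher et al. (2007)."""
    iu = 1j * u
    b = kappa - rho * sigma * iu
    d = np.sqrt(b * b + sigma ** 2 * (iu + u * u))
    g = (b - d) / (b + d)
    e = np.exp(-d * t)
    C = iu * (np.log(s0) + r * t) \
        + kappa * theta / sigma ** 2 * ((b - d) * t - 2.0 * np.log((1.0 - g * e) / (1.0 - g)))
    D = (b - d) / sigma ** 2 * (1.0 - e) / (1.0 - g * e)
    return np.exp(C + D * v0)

# Sanity checks: phi(0) = 1 and |phi(u)| <= 1 for real u,
# as must hold for any characteristic function.
phi = lambda u: heston_cf(u, t=1.0, s0=100.0, r=0.02, v0=0.04,
                          kappa=1.5, theta=0.04, sigma=0.3, rho=-0.7)
print(phi(0.0), abs(phi(1.5)))
```

This formulation keeps the complex logarithm on its principal branch for typical parameter ranges, which is exactly the kind of branch-cut issue the "safe region" analysis in the abstract concerns.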
 Date Issued
 2016
 Identifier
 FSU_FA2016_Chiu_fsu_0071E_13584
 Format
 Thesis
 Title
 Financial Assets in a Heterogeneous Agent General Equilibrium Model with Aggregate and Idiosyncratic Risk.
 Creator

Schmerbeck, Aaron J., Beaumont, Paul M., Kercheval, Alec N., Nolder, Craig, Marquis, Milton, Schlagenhauf, Don, Department of Economics, Florida State University
 Abstract/Description

The financial economics profession has determined that identical agents in a dynamic, stochastic, general equilibrium (DSGE) model do not reproduce the price and trading dynamics observed in financial markets. There has been quite a bit of research over the last three decades extending heterogeneity to the Lucas asset pricing framework to address this issue. Once the assumption of homogeneous agents is relaxed, the problem becomes increasingly complex due to a state space including the wealth distribution, continuation utilities, and wealth distribution dynamics. To establish a more computationally feasible model, special modifications have been made, such as heterogeneity in idiosyncratic shocks but not in risk aversion, including aggregate or idiosyncratic risk (but not both), or assuming no growth in the economy (steady state). In this research, I define a DSGE model with heterogeneous agents, where the heterogeneity refers to differing CRRA utilities through risk aversion. The economy has growth due to the assumed dividend process. Agents face idiosyncratic and aggregate shocks in a complete markets setting. The framework of the provided algorithm enables issues to be addressed beyond homogeneous agent models. The numerical simulation results of this model exhibit considerable asset price volatility and high trading volume. These results occur even in the complete markets setting, where investors are expected to fully insure. Given these dynamics from the simulations of the algorithm, I demonstrate the ability to calibrate this model to address specific financial economic issues, such as the equity premium puzzle. More importantly, this exercise assumes realistic agent parameters for risk aversion and discount factors, relative to economic theory.
 Date Issued
 2014
 Identifier
 FSU_migr_etd9088
 Format
 Thesis
 Title
 GPU Computing in Financial Engineering.
 Creator

Xu, Linlin, Ökten, Giray, Sinha, Debajyoti, Bellenot, Steven F., Gallivan, Kyle A., Kercheval, Alec N., Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

GPU computing has become popular in computational finance, and many financial institutions are moving their CPU-based applications to the GPU platform. We explore efficient GPU implementations for two main financial problems: pricing, and computing sensitivities (Greeks). Since most Monte Carlo algorithms are embarrassingly parallel, Monte Carlo has become a focal point in GPU computing. GPU speedup examples reported in the literature often involve Monte Carlo algorithms, and there are commercially available software tools that help migrate Monte Carlo financial pricing models to GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss the existing (quasi-)Monte Carlo sequences in NVIDIA's CURAND library for GPU. We discuss specific features of the GPU architecture relevant for developing efficient (quasi-)Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method and compare it with some of the existing implementations on GPU when they are used to price caplets in the LIBOR market model and mortgage-backed securities. We then develop a cache-aware implementation of a 3D parabolic PDE solver on GPU. We apply the well-known Craig-Sneyd scheme and derive the corresponding discretization. We discuss the memory hierarchy of the GPU and suggest a data structure suitable for the GPU's caching system. We compare the performance of the PDE solver on CPU and GPU. Finally, we consider sensitivity analysis for financial problems via Monte Carlo and PDE methods. We review three commonly used methods and point out their advantages and disadvantages. We present a survey of automatic differentiation (AD) and show the challenges faced in memory consumption when AD is applied to financial problems. We discuss two optimization techniques that help reduce the memory footprint significantly. We conduct the sensitivity analysis for the LIBOR market model and suggest an optimization for its AD implementation on GPU. We also apply AD to a 3D parabolic PDE and use the GPU to reduce the execution time.
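The Monte Carlo versus quasi-Monte Carlo comparison surveyed in this abstract can be illustrated on CPU with a minimal Python sketch (not code from the dissertation; the integrand, function name, and sample size are invented for illustration), using SciPy's scrambled Sobol' generator in place of CURAND:

```python
import numpy as np
from scipy.stats import qmc

def estimate_integral(points):
    """Average f(x, y) = exp(x + y) over sample points in [0, 1)^2.
    The exact integral is (e - 1)^2."""
    return np.exp(points[:, 0] + points[:, 1]).mean()

n = 2 ** 12                       # a power of 2, as Sobol' sampling prefers
rng = np.random.default_rng(0)
plain = estimate_integral(rng.random((n, 2)))                              # pseudo-random
sobol = estimate_integral(qmc.Sobol(d=2, scramble=True, seed=0).random(n))  # quasi-random

exact = (np.e - 1) ** 2
print(abs(plain - exact), abs(sobol - exact))  # the Sobol' error is typically much smaller
```

The faster error decay of scrambled Sobol' points, roughly O(1/n) versus O(1/sqrt(n)) for pseudo-random sampling on smooth integrands, is what makes quasi-Monte Carlo attractive for the pricing problems the abstract describes.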
 Date Issued
 2015
 Identifier
 FSU_migr_etd9526
 Format
 Thesis
 Title
 Modelling Limit Order Book Dynamics Using Hawkes Processes.
 Creator

Chen, Yuanda, Kercheval, Alec N., Beaumont, Paul M., Ewald, Brian D., Zhu, Lingjiong, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

The Hawkes process serves as a natural choice for modeling self-exciting dynamics, such as the behavior of an electronic exchange-hosted limit order book (LOB). However, due to the lack of analytical solutions, probability estimates of future events often must rely on Monte Carlo simulation. Although Monte Carlo simulation is known to be good at solving path-dependent problems, it has the limitation that a long computation time is often required to achieve good accuracy. This is a concern in fields like algorithmic trading, where fast calculation is essential. In this dissertation we propose the use of a 4-dimensional Hawkes process to model the LOB and to forecast mid-price movement probabilities using Monte Carlo simulation. We study the feasibility of making this prediction quickly enough to be applicable in practice. We show that fast predictions are feasible, and show in tests on real data that the model has some trading value in forecasting mid-price movements. This dissertation also compares the performance of several popular computer languages, Python, MATLAB, Cython and C, in single-core experiments, and examines the scalability of parallel computing using Cython and C.
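For readers unfamiliar with Hawkes processes, a univariate special case of the self-exciting dynamics described in this abstract can be simulated with Ogata's thinning algorithm. This is a generic sketch with invented function name and parameters, not the 4-dimensional LOB model of the dissertation:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Simulate a univariate Hawkes process with exponential kernel,
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    by Ogata's thinning algorithm. Stationarity requires alpha < beta."""
    t, events = 0.0, []
    while t < horizon:
        # Intensity decays between events, so its current value bounds it
        # from above until the next accepted event.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)        # candidate arrival time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:        # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

rng = np.random.default_rng(42)
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=100.0, rng=rng)
# Long-run mean intensity is mu / (1 - alpha/beta) ~ 1.07 events per unit time here.
print(len(events))
```

Each accepted event raises the intensity, making further events temporarily more likely; this clustering is the "self-exciting" behavior that motivates using Hawkes processes for order-book event streams.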
 Date Issued
 2017
 Identifier
 FSU_2017SP_Chen_fsu_0071E_13187
 Format
 Thesis
 Title
 Random Sobol' Sensitivity Analysis and Model Robustness.
 Creator

Mandel, David, Ökten, Giray, Hussaini, M. Yousuff, Huffer, Fred W. (Fred William), Kercheval, Alec N., Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

This work develops both the theoretical foundation and the practical application of random Sobol' analysis, with two goals. The first is to provide a more general and accommodating approach to global sensitivity analysis, in which the parameter distributions themselves contain uncertainty, and hence the sensitivity results are random quantities as well. The framework for this approach is motivated by empirical evidence of such behavior, and examples of this behavior in interest rate and temperature modeling are provided. The second goal is to compare competing models on their robustness, a notion developed and defined to provide a quantitative solution to model selection based on model uncertainty and sensitivity.
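As background for the sensitivity analysis described in this abstract, classical (non-random) first-order Sobol' indices can be estimated by a standard pick-freeze Monte Carlo scheme. The sketch below uses an invented additive test function and function name, not a model from the dissertation:

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """Estimate first-order Sobol' indices S_i = Var(E[Y | X_i]) / Var(Y)
    with the pick-freeze scheme: two independent sample matrices A and B,
    plus matrices AB_i that copy A except for column i taken from B."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # vary only the i-th input
        S[i] = np.mean(yB * (f(ABi) - yA)) / var_y  # Saltelli (2010) estimator
    return S

# Additive model Y = X1 + 2*X2 with uniform inputs: variances 1/12 and 4/12,
# so the first-order indices are S1 = 0.2 and S2 = 0.8.
rng = np.random.default_rng(1)
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2, n=100_000, rng=rng)
print(S)
```

In the "random Sobol'" setting of the thesis, the input distributions themselves would be uncertain, so an estimate like `S` becomes a random quantity rather than a fixed vector.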
 Date Issued
 2017
 Identifier
 FSU_2017SP_Mandel_fsu_0071E_13682
 Format
 Thesis
 Title
 GameTheoretic Models of Animal Behavior Observed in Some Recent Experiments.
 Creator

Dai, Yao, Mesterton-Gibbons, Mike, Hurdal, Monica K., Kercheval, Alec N., Quine, J. R. (John R.), Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

In this dissertation, we create three theoretical models to answer questions raised by recent experiments that lie beyond the scope of current theory. In the landmark-effect model, we determine the size, shape and location of a territory that is optimal in the sense of minimizing defense costs, when a given proportion of the boundary is landmarked and its primary benefit in terms of fitness is greater ease of detecting intruders across it. In the subjective-resource-value model, we develop a game-theoretic model based on the War-of-Attrition game. Our results confirm that allowing players to adapt their subjective resource value based on their experiences can generate strong winner effects with weak or even no loser effects, which is not predicted by other theoretical models. In the rearguard-action model, we develop two versions of a game-theoretic model with different hypotheses on the function of volatile chemical emissions in animal contests, and we compare their results with experimental observations. The two hypotheses are that volatile chemicals are released to prevent the winner of the current round of a contest from translating its victory into permanent possession of a contested resource, or that they are used to prevent a winner from inflicting costs on a fleeing loser.
 Date Issued
 2017
 Identifier
 FSU_2017SP_Dai_fsu_0071E_13762
 Format
 Thesis