Search results
 Title
 Analysis of cross-classified data using negative binomial models.
 Creator

Ramakrishnan, Viswanathan., Florida State University
 Abstract/Description

Several procedures are available for analyzing cross-classified data under the Poisson model. When the data suggest the presence of "non-Poisson" variation, an alternative model is desirable. A negative binomial model is often a useful alternative. In this dissertation, methodology for analyzing data under a two-parameter negative binomial model is provided. A conditional likelihood approach is suggested to simplify estimation and inference procedures. Large sample properties of the conditional likelihood approach are derived. Based on simulations, these properties are examined for small samples. The suggested methodology is applied to two sets of data from ecological research studies.
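The "non-Poisson" (overdispersed) variation this abstract refers to can be sketched numerically: a Poisson rate that itself varies, here Gamma-distributed, yields negative binomial counts whose variance exceeds the mean. This is a generic illustration of the model family, not the dissertation's methodology; all parameter values are assumptions.

```python
import numpy as np

# Gamma-Poisson mixture: marginally negative binomial counts.
# shape/scale are illustrative, not taken from the dissertation.
rng = np.random.default_rng(0)

shape, scale = 2.0, 3.0
rates = rng.gamma(shape, scale, size=100_000)   # heterogeneous Poisson rates
counts = rng.poisson(rates)                     # negative binomial marginally

mean, var = counts.mean(), counts.var()
# Theory: mean = shape*scale = 6, variance = mean + mean**2/shape = 24,
# so the variance is well above the mean (unlike a pure Poisson).
print(f"mean={mean:.2f}, var={var:.2f}")
```

A pure Poisson sample would instead show `var` approximately equal to `mean`, which is the diagnostic contrast the abstract's alternative model addresses.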
 Date Issued
 1989
 Identifier
 AAI9016503, 3161994, FSDT3161994, fsu:78193
 Format
 Document (PDF)
 Title
 Bayesian nonparametric estimation via Gibbs sampling for coherent systems with redundancy.
 Creator

Lawson, Kevin Lee., Florida State University
 Abstract/Description

We consider a coherent system S consisting of m independent components for which we do not know the distributions of the components' lifelengths. If we know the structure function of the system, then we can estimate the distribution of the system lifelength by estimating the distributions of the lifelengths of the individual components. Suppose that we can collect data under the 'autopsy model', wherein a system is run until a failure occurs and then the status (functioning or dead) of each component is obtained. This test is repeated n times. The autopsy statistics consist of the age of the system at the time of breakdown and the set of parts that are dead by the time of breakdown. Using the structure function and the recorded status of the components, we then classify the failure time of each component. We develop a nonparametric Bayesian estimate of the distributions of the component lifelengths and then use this to obtain an estimate of the distribution of the lifelength of the system. The procedure is applicable to machine-test settings wherein the machines have redundant designs. A parametric procedure is also given.
 Date Issued
 1994
 Identifier
 AAI9502812, 3088467, FSDT3088467, fsu:77272
 Format
 Document (PDF)
 Title
 BAYESIAN SOLUTIONS TO SOME CLASSICAL PROBLEMS OF STATISTICS.
 Creator

PEREIRA, CARLOS ALBERTO DE BRAGANCA., Florida State University
 Abstract/Description

Three of the basic questions of Statistics may be stated as follows: (A) Which portion of the data X is actually informative about the parameter of interest θ? (B) How can all the relevant information about θ provided by the data X be extracted? (C) What kind of information about θ do the data X possess? The perspective of this dissertation is that of a Bayesian.
Chapter I is essentially concerned with question A. The theory of conditional independence is explained, and the relations between ancillarity, sufficiency, and statistical independence are discussed in depth. Some related concepts, such as specific sufficiency, bounded completeness, and splitting sets, are also studied in some detail. The language of conditional independence is used in the remaining chapters.
Chapter II deals with question B for the particular problem of analysing categorical data with missing entries. It is demonstrated how a suitably chosen prior for the frequency parameters can streamline the analysis in the presence of missing entries due to nonresponse or other causes. The two cases where the data follow the Multinomial or the Multivariate Hypergeometric model are treated separately. In the first case it is adequate to restrict the prior (for the cell probabilities) to the class of Dirichlet distributions. In the Hypergeometric case it is convenient to select a prior (for the cell population frequencies) from the class of Dirichlet-Multinomial (DM) distributions. The DM distributions are studied in detail.
Chapter III is directly related to question C. Conditions on the likelihood function and on the prior distribution are presented in order to assess the effect of the sample on the posterior distribution. More specifically, it is shown that under certain conditions, the larger the observations obtained, the larger (stochastically in terms of the posterior distribution) is the appropriate parameter.
Finally, Chapter IV deals with the characterization of distributions in terms of Blackwell comparison of experiments. It is shown that a result (for the Hypergeometric model) obtained in Chapter II is actually a consequence of a property of complete families of distributions.
 Date Issued
 1980
 Identifier
 AAI8108380, 3084857, FSDT3084857, fsu:74358
 Format
 Document (PDF)
 Title
 A comparison of robust and least squares regression models using actual and simulated data.
 Creator

Gilbert, Scott Alan., Florida State University
 Abstract/Description

The purpose of this study was to compare several robust regression techniques to ordinary least squares (OLS) regression when analyzing bivariate and multivariate data. The bivariate analysis compared the performance of alternative robust procedures against the standard OLS regression techniques with regard to the detection of outliers, and demonstrated the weaknesses of OLS regression and the standard OLS outlier diagnostic techniques when multiple outliers are present. In addition, this research assessed the empirical performance of alpha and power under three non-normal probability density functions using a Monte Carlo simulation.
The first analysis focused on several bivariate data sets. Each data set was plotted and each of the regression models was used to analyze the data. The usual results (e.g., R², regression coefficients, standard errors, and regression diagnostics) were examined to give a visual as well as empirical analysis of the models' performance in the presence of multiple outliers.
The second component of this study entailed a Monte Carlo simulation of five robust regression models and OLS regression under four probability density functions. The variables included in the study were placed in one 2¹×3² and two 3² factorial designs repeated over four probability density functions, resulting in a total of 90 experimental runs of the Monte Carlo simulation. Random samples were generated and then transformed to fit desired distributional moment characteristics. The incremental null hypothesis was used as the basis to calculate empirical alpha and power values.
The analysis demonstrated the inadequacies of the standard OLS-based outlier detection methods and explained how regression analysis could be improved if a robust regression method is used in parallel with OLS regression. The multivariate analysis demonstrated the robustness of the OLS regression model to three non-normal populations. It further demonstrated a moderate inflation of alpha for the M-class of robust regression models and a lack of power stability with the rank transform regression method. Based on the results of this study, recommendations were made for using robust regression methods, and suggestions for future research were offered.
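The OLS-versus-robust comparison described in this abstract can be sketched on toy bivariate data: an M-class fit (Huber weights via iteratively reweighted least squares) resists gross outliers that pull the OLS line. The data, tuning constant, and weighting scheme below are generic illustrations, not the dissertation's designs.

```python
import numpy as np

# Toy contaminated regression: y = 2 + 0.5*x + noise, with 5 gross outliers.
rng = np.random.default_rng(1)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5, n)
y[:5] += 15.0                                   # five gross outliers
X = np.column_stack([np.ones(n), x])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimate by iteratively reweighted least squares."""
    beta = ols(X, y)
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)              # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)        # weighted normal eqs
    return beta

b_ols, b_hub = ols(X, y), huber_irls(X, y)
# The Huber slope stays near the true 0.5; OLS is pulled by the outliers.
```

This mirrors the study's bivariate point: the OLS fit absorbs the contamination, while the robust fit (and a comparison of the two) flags it.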
 Date Issued
 1992
 Identifier
 AAI9222385, 3087822, FSDT3087822, fsu:76632
 Format
 Document (PDF)
 Title
 THE COMPARISON OF SENSITIVITIES OF EXPERIMENTS (MAXIMUM LIKELIHOOD, RANDOM, FIXED, ANALYSIS OF VARIANCE).
 Creator

YOUNG, BARBARA NELSON., Florida State University
 Abstract/Description

The sensitivity of a measurement technique is defined to be its ability to detect differences among the treatments in a fixed effects design, or the presence of a between-treatments component of variance in a random effects design. Consider an experiment, consisting of two identical subexperiments, designed specifically for the purpose of comparing two measurement techniques. It is assumed that the techniques of analysis of variance are applicable in analyzing the data obtained from the two measurement techniques. The subexperiments may have either fixed or random treatment effects, in either one-way or general block designs. It is assumed that the experiment yields bivariate observations from the two measurement methods, which may or may not be independent. Likelihood ratio tests are used in the various settings of this dissertation both to extend current techniques and to provide alternative methods for comparing the sensitivities of experiments.
 Date Issued
 1985
 Identifier
 AAI8524629, 3086182, FSDT3086182, fsu:75665
 Format
 Document (PDF)
 Title
 A comparison of two methods of bootstrapping in a reliability model.
 Creator

Chiang, Yuang-Chin., Florida State University
 Abstract/Description

We consider bootstrapping in the following reliability model, which was considered by Doss, Freitag, and Proschan (1987). Available for testing is a sample of iid systems, each having the same structure of m independent components. Each system is continuously observed until it fails. For every component in each system, either a failure time or a censoring time is recorded. A failure time is recorded if the component fails before or at the time of system failure; otherwise a censoring time is recorded. To estimate the distributions of the component lifelengths F_1, ..., F_m, one can formally compute the Kaplan-Meier estimates of F_1, ..., F_m. Various quantities of interest, such as the probability that a new system will survive time t_0, may then be estimated by combining these estimates in a suitable way. In this model, bootstrapping can be carried out in two different ways. One can resample n systems at random from the original n systems. Alternatively, one can construct artificial systems by generating independent random lifelengths from the Kaplan-Meier estimates F_j, and from those form artificial data. The two methods are distinct. We show that asymptotically, bootstrapping by either method yields correct answers. We also compare the two methods via simulation studies.
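The two resampling schemes contrasted in this abstract can be sketched in a deliberately simplified setting: a series system of m = 3 components whose lifelengths are fully observed (the dissertation handles the harder censored case via Kaplan-Meier estimates). Every name and parameter below is an illustrative assumption.

```python
import numpy as np

# n systems, each the minimum of m = 3 independent exponential components.
rng = np.random.default_rng(2)
n, m, t0, B = 200, 3, 1.0, 500
comp = rng.exponential(scale=[2.0, 3.0, 4.0], size=(n, m))
sys_life = comp.min(axis=1)          # series system fails at first failure

def surv_prob(lifetimes, t):
    return (lifetimes > t).mean()    # empirical P(system survives past t)

# Method 1: resample whole systems from the original n systems.
boot1 = [surv_prob(sys_life[rng.integers(0, n, n)], t0) for _ in range(B)]

# Method 2: resample each component's lifelengths independently,
# then reassemble artificial systems from those draws.
boot2 = []
for _ in range(B):
    art = np.column_stack([comp[rng.integers(0, n, n), j] for j in range(m)])
    boot2.append(surv_prob(art.min(axis=1), t0))
```

Both bootstrap distributions centre near the empirical survival estimate, which is the asymptotic-equivalence point the abstract proves (here only illustrated, without censoring).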
 Date Issued
 1988
 Identifier
 AAI8906216, 3161719, FSDT3161719, fsu:77918
 Format
 Document (PDF)
 Title
 The computation of probabilities which involve spacings, with applications to the scan statistic.
 Creator

Lin, Chien-Tai., Florida State University
 Abstract/Description

We develop a methodology for evaluating probabilities which involve linear combinations of spacings and then present some applications of this methodology. The basic idea underlying our method was given by Huffer (1988): A recursion is used to break up the joint distribution of several linear combinations of spacings into a sum of simpler components. The same recursion is then applied to each of these components, and so on. The process is continued until we obtain components which are simple and easily expressed in closed form. We describe algorithms and a computer program (written in C) which implement this approach. Our approach has two advantages. First, it is fairly general and can be used to solve a variety of problems involving linear combinations of spacings. Second, because the output of our procedure is a polynomial whose coefficients are computed exactly, we can supply numerical answers which are accurate to any required degree of precision. We apply our program to compute the distribution of the scan statistic for small sample sizes. We also use the recursion and computer program to calculate the lower-order moments of the number of clumps in randomly distributed points. We can use these moments to obtain bounds and approximations for the distribution of the scan statistic. Our approximations are based on fitting a compound Poisson distribution to the moments of the number of clumps.
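The exact recursion of Huffer (1988) is beyond a snippet, but the quantity it targets is easy to state and to approximate by simulation: the scan statistic S_w is the largest number of the n uniform points on [0, 1] covered by any window of width w. The values of n and w below are assumptions for illustration; the dissertation computes these probabilities exactly, not by Monte Carlo.

```python
import numpy as np

def scan_statistic(points, w):
    """Max number of points in any interval [p, p + w] anchored at a point."""
    pts = np.sort(points)
    return max(np.searchsorted(pts, p + w, side="right") - i
               for i, p in enumerate(pts))

rng = np.random.default_rng(3)
n, w, reps = 10, 0.1, 20_000
vals = [scan_statistic(rng.uniform(0, 1, n), w) for _ in range(reps)]
p_at_least_3 = np.mean(np.array(vals) >= 3)   # Monte Carlo P(S_w >= 3)
```

The maximum over windows anchored at the observed points equals the maximum over all windows, since a window's count can only increase by sliding its left edge onto a point.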
 Date Issued
 1993
 Identifier
 AAI9416150, 3088291, FSDT3088291, fsu:77095
 Format
 Document (PDF)
 Title
 Conditional bootstrap methods for censored data.
 Creator

Kim, Ji-Hyun., Florida State University
 Abstract/Description

We first consider the random censorship model of survival analysis. The pairs of positive random variables (X_i, Y_i), i = 1, ..., n, are independent and identically distributed, with distribution functions F(t) = P(X_i ≤ t) and G(t) = P(Y_i ≤ t), and the Y's are independent of the X's. We observe only (T_i, δ_i), i = 1, ..., n, where T_i = min(X_i, Y_i) and δ_i = I(X_i ≤ Y_i). The X's represent survival times, the Y's represent censoring times. Efron (1981) proposed two bootstrap methods for the random censorship model and showed that they are distributionally the same. Akritas (1986) established the weak convergence of the bootstrapped Kaplan-Meier estimator F̂ of F when bootstrapping is done by this method. Let us now consider bootstrapping more closely. Suppose that we wish to estimate the variance of F̂(t). If we knew the Y's, then we would condition on them by the ancillarity principle, since the distribution of the Y's does not depend on F. That is, we would want to estimate Var{F̂(t) | Y_1, ..., Y_n}. Unfortunately, in the random censorship model we do not see all the Y's. If δ_i = 0 we see the exact value of Y_i, but if δ_i = 1 we know only that Y_i > T_i. Let us denote this information on the Y's by C. Thus, what we want to estimate is Var{F̂(t) | C}. Efron's scheme is appropriate for estimating the unconditional variance. We propose a new bootstrap method which provides an estimate of Var{F̂(t) | C}.
In this research we show that the Kaplan-Meier estimator of F formed by the new bootstrap method has the same limiting distribution as the one formed by Efron's approach. The results of simulation studies assessing the small-sample performance of the two bootstrap methods are reported. We also consider the model in which the X_i's are censored by the Y_i's and also by known fixed constants, and propose an appropriate bootstrap method for that model. This bootstrap method is a readily modified version of the new bootstrap method above.
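The Kaplan-Meier estimator that both bootstrap schemes in this abstract operate on can be written compactly. This is a generic sketch of that estimator on tiny made-up data (distinct observation times, so tie conventions are avoided), not an implementation of either bootstrap.

```python
import numpy as np

def kaplan_meier(t, delta):
    """Kaplan-Meier estimate of S(t) = 1 - F(t) under random censorship.

    t     : observed times T_i = min(X_i, Y_i)
    delta : 1 if the survival time X_i was observed, 0 if censored
    Returns (event_times, survival) evaluated just after each failure.
    """
    order = np.argsort(t)
    t, delta = np.asarray(t)[order], np.asarray(delta)[order]
    n = len(t)
    at_risk = n - np.arange(n)                # subjects with T_i >= current time
    factors = np.where(delta == 1, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.cumprod(factors)                # product-limit form
    return t[delta == 1], surv[delta == 1]

t = [3, 5, 7, 8, 10, 12]
d = [1, 0, 1, 1, 0, 1]
times, s = kaplan_meier(t, d)
# S drops only at observed failures; censored times just thin the risk set:
# S = 5/6 at t=3, 5/8 at t=7, 5/12 at t=8, 0 at t=12.
```

The conditional-bootstrap question in the abstract is precisely about how to resample inputs to this estimator while holding the censoring information C fixed.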
 Date Issued
 1990
 Identifier
 AAI9113938, 3162201, FSDT3162201, fsu:78399
 Format
 Document (PDF)
 Title
 Contributions to the theory of arrangement increasing functions.
 Creator

Proschan, Michael Arthur., Florida State University
 Abstract/Description

A function f(x) is said to be arrangement increasing (AI) if it increases each time we correct an out-of-order pair of coordinates, x_j > x_k for some j < k, by transposing the two x coordinates. The theory of AI functions is tailor-made for ranking and selection problems, in which case we assume that the density f(θ, x) of observations with respective parameters θ_1, ..., θ_n is AI, and the goal is to determine the largest or smallest parameters.
In this dissertation we present new applications of AI functions in such areas as biology and reliability, and we generalize the notion of AI functions. We consider multivector extensions, some with and one without respect to parameter vectors, and we connect these. Another generalization (TEGO) is motivated by the connection between total positivity (TP) and AI. TEGO results are shown to imply AI and TP results. We also define and develop a partial ordering on densities of rank vectors. The theory, which involves finding the extreme points of the convex set of AI rank densities, is then used to establish some power results of rank tests.
 Date Issued
 1989
 Identifier
 AAI9002934, 3161869, FSDT3161869, fsu:78068
 Format
 Document (PDF)
 Title
 Cumulative regression function methods in survival analysis and time series.
 Creator

Zhang, Mei-Jie., Florida State University
 Abstract/Description

One may estimate a conditional hazard function from grouped (and possibly censored) survival data by the time- and covariate-specific occurrence/exposure rate. Asymptotic results for cumulative versions of this estimator are developed, utilizing the general framework of counting processes. In particular, a grouped-data-based goodness-of-fit test for Cox's proportional hazards model is given. Various constraints on the asymptotic behavior of the widths of the calendar periods and covariate strata employed in grouping the data are needed to prove the results. Actual performance of the estimators and test statistics is evaluated by Monte Carlo methods.
We also consider the problem of identifying the class of time series model to which a series belongs, based on observation of part of the series. Techniques of nonparametric estimation have been applied to this problem by Auestad and Tjostheim (Biometrika 77 (1990): 669-687), who used kernel estimates of the one-step lagged conditional mean and variance functions. We study cumulative versions of such estimates. These are more stable than the kernel estimates and can be used to construct confidence bands for the underlying cumulative mean and variance functions. Goodness-of-fit tests for specific parametric models are also developed.
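The occurrence/exposure rate this abstract builds on can be sketched for the simplest case, grouped survival data with no covariates or censoring: within each calendar interval, the hazard is estimated by (failures in the interval) / (person-time at risk in the interval). The function name, grouping, and data are illustrative assumptions.

```python
import numpy as np

def occurrence_exposure(times, events, breaks):
    """Occurrence/exposure hazard rate per interval [breaks[i], breaks[i+1])."""
    rates = []
    for a, b in zip(breaks[:-1], breaks[1:]):
        # person-time each subject spends alive inside [a, b)
        exposure = (np.clip(times, a, b) - a).sum()
        # failures observed inside [a, b)
        occ = ((times >= a) & (times < b) & events).sum()
        rates.append(occ / exposure if exposure > 0 else 0.0)
    return np.array(rates)

rng = np.random.default_rng(4)
t = rng.exponential(2.0, size=5000)   # true constant hazard 1/2
e = np.ones_like(t, dtype=bool)       # every exit is a failure in this sketch
h = occurrence_exposure(t, e, np.array([0.0, 1.0, 2.0, 3.0]))
# each element of h should sit near the true hazard 0.5
```

Cumulative sums of `h` times the interval widths give the cumulative-version estimator whose asymptotics the dissertation studies.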
 Date Issued
 1991
 Identifier
 AAI9202323, 3087663, FSDT3087663, fsu:76478
 Format
 Document (PDF)
 Title
 DETERMINING A SUFFICIENT LEVEL OF INTERRATER RELIABILITY (POWER ANALYSIS, MISCLASSIFICATION, SAMPLE SIZE).
 Creator

RASP, JOHN M., Florida State University
 Abstract/Description

The reliability of a test or measurement procedure is, generally speaking, an index of the consistency of its results. Interrater reliability assesses the consistency of judgements among a set of raters. We model the observation taken on a subject by an unreliable procedure as the sum of a true score with mean μ and variance σ_T² and an error term with mean 0 and variance σ_E². The reliability coefficient then is ρ = σ_T²/(σ_T² + σ_E²).
The reliability of an instrument or rating procedure is generally evaluated in an initial experiment (or series of experiments) known as a "reliability study." Once an instrument is established as having some degree of reliability, it is then used as a measurement tool in subsequent research, known as "decision studies."
An unreliable procedure measures imperfectly. The impact of the error in measurement is investigated as it relates to three broad areas of statistical procedures: estimation, hypothesis testing, and decision-making.
An unreliable measurement decreases the precision of estimates. The effect of an unreliable measurement on the width of a confidence interval for the population mean is examined. Also, an expression is developed to facilitate estimation of the reliability of a test or measurement in a decision study when the populations of interest may differ from those in the reliability study.
An unreliable instrument weakens hypothesis tests. The extent to which lack of reliability attenuates the power of the two-sample t-test, the F-test in the analysis of variance, and the t-test for statistically significant correlation between two variables is investigated.
An unreliable measurement engenders false classifications. A dichotomous decision is considered, and expressions for the probability of misclassifying a subject by a rating procedure with a given reliability are developed. Overall as well as directional misclassification rates are found under the model of true scores and errors distributed as independent normals. Effects of departures from this model, by heavy-tailed and skewed true score and error distributions, and by errors whose variance is a function of the true score, are considered. A general expression for this misclassification probability is found. A confidence interval for the misclassification probability is developed.
These results provide tools for a researcher to make better-informed decisions concerning the design of an experiment. They permit the costs of increased reliability to be compared more knowledgeably with the consequences of using an unreliable measurement procedure in a given situation.
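The true-score model above is easy to simulate, and doing so illustrates the attenuation phenomenon the abstract investigates: the correlation between two unreliable measurements of the same true score shrinks by roughly sqrt(ρ₁·ρ₂). The variances chosen below are illustrative assumptions.

```python
import numpy as np

# observed = true + error, reliability rho = var_T / (var_T + var_E).
rng = np.random.default_rng(5)
n = 100_000
true = rng.normal(0, 1, n)               # sigma_T^2 = 1
obs1 = true + rng.normal(0, 0.5, n)      # rho1 = 1 / (1 + 0.25) = 0.8
obs2 = true + rng.normal(0, 1.0, n)      # rho2 = 1 / (1 + 1.0)  = 0.5

r = np.corrcoef(obs1, obs2)[0, 1]
# Attenuation: corr(obs1, obs2) ~= sqrt(rho1 * rho2) = sqrt(0.4) ~= 0.632,
# even though both instruments measure the identical true score.
```

This is the mechanism behind the abstract's weakened hypothesis tests: the unreliable measurements carry a correlation of about 0.63 where a perfectly reliable pair would show 1.0.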
 Date Issued
 1984
 Identifier
 AAI8416723, 3085837, FSDT3085837, fsu:75324
 Format
 Document (PDF)
 Title
 Effects of inspection error on optimal inspection policies and software fault detection models.
 Creator

Herge, Donna Carol., Florida State University
 Abstract/Description

Inspection policies are essential for many types of systems for which the status (functioning or failed) can be determined only by actual inspection. Two types of inspection error may occur: a functioning system may be incorrectly assessed as having failed, or a failed system may be incorrectly assessed as functioning. These errors are designated as Type I and Type II respectively, and their impact on optimal inspection policies and software fault detection models is analyzed. For a periodic inspection model with Type I error, an optimal replacement age is obtained; then monotonicity and asymptotic properties of the long-run expected cost per unit of time are presented. Type I error is incorporated into a cumulative damage model. When the failure density is reverse rule of order 2, an algorithm to compute an optimal inspection sequence is derived, and it is proven that the optimal intervals are increasing. Extending the optimal inspection sequence model of Barlow, Hunter, and Proschan to include Type II inspection error, it is proven that optimal intervals are nonincreasing for a PF_2 density, and an algorithm to compute optimal intervals is derived. Additionally, monotonicity and majorization results are obtained for an optimal inspection sequence with Type II error. The impact of fault-detection error on a software optimal release time model is shown. The effect of fault diversity on the Jelinski-Moranda model, and how this relates to imperfect fault detection, is demonstrated.
 Date Issued
 1992
 Identifier
 AAI9306034, 3087973, FSDT3087973, fsu:76780
 Format
 Document (PDF)
 Title
 ESTIMATING JOINTLY SYSTEM AND COMPONENT RELIABILITIES USING A MUTUAL CENSORSHIP APPROACH (SURVIVAL ANALYSIS, COUNTING PROCESSES, MARTINGALES, KAPLAN-MEIER, RELIABILITY FUNCTION).
 Creator

FREITAG, STEVEN ARTHUR., Florida State University
 Abstract/Description

Let F denote the life distribution of a coherent structure of independent components. Suppose that we have a sample of independent systems, each having the structure φ. Each system is continuously observed until it fails. For every component in each system, either a failure time or a censoring time is recorded. A failure time is recorded if the component fails before or at the time of system failure; otherwise a censoring time is recorded. We introduce a method for finding estimates for F(t), quantiles, and other functionals of F, based on the censorship of the component lives by system failure. We present limit theorems that enable the construction of confidence intervals for large samples.
 Date Issued
 1986
 Identifier
 AAI8609671, 3086298, FSDT3086298, fsu:75781
 Format
 Document (PDF)
 Title
 ESTIMATING MULTIDIMENSIONAL TABLES FROM SURVEY DATA: PREDICTING MAGAZINE AUDIENCES.
 Creator

DANAHER, PETER JOSEPH., Florida State University
 Abstract/Description

Suppose an advertiser constructs an advertising campaign by placing k advertisements in a magazine. He now estimates the proportion of the population which sees none, one, or up to all k advertisements (called the exposure distribution). Several criteria for evaluating the effectiveness of the campaign can be obtained directly from the exposure distribution. Two of them are reach, the proportion of the population which is exposed to at least one of the advertisements and effective reach, the...
Show moreSuppose an advertiser constructs an advertising campaign by placing k advertisements in a magazine. He now estimates the proportion of the population which sees none, one, or up to all k advertisements (called the exposure distribution). Several criteria for evaluating the effectiveness of the campaign can be obtained directly from the exposure distribution. Two of them are reach, the proportion of the population which is exposed to at least one of the advertisements and effective reach, the mean of the exposure distribution., We develop three exposure distribution models for the cases where advertising campaigns are comprised of one, two, or three or more magazines. The models build on each other in that the model for one magazine is used to improve the fit of the model for two magazines and the model for two magazines is used to estimate the parameters of the model for three or more magazines., A thorough empirical test, using the AGB:McNair "National Media Survey", shows that each of our models outperforms the best currentlyavailable models. In addition, the three models are proved to have optimal asymptotic properties., The models are used to select a media schedule which maximizes either reach or effective reach subject to a budget constraint. A monotonicity property of reach and effective reach yields an algorithm for optimizing both reach and effective reach that greatly reduces computation time over conventional methods used to solve integer programming problems., It is more useful to estimate the proportion of the population which sees the advertisements in a magazine rather than the proportion which sees the magazine. Often, however, no advertisement recall data is available so we are forced to estimate the proportion which is exposed to just the magazines. If advertisement recall data is available we give a natural and simple adjustment of the original magazine exposure data to get advertisement exposure data. 
Our models also give an excellent fit to these adjusted exposure data.
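Per the abstract's definitions, both criteria are direct functionals of the exposure distribution. A minimal sketch (the distribution values below are illustrative, not taken from the survey):

```python
# Exposure distribution for a k-ad campaign: p[i] is the proportion of the
# population seeing exactly i of the k advertisements (illustrative values).
p = [0.50, 0.30, 0.12, 0.08]  # k = 3; proportions sum to 1

# Reach: proportion exposed to at least one advertisement.
reach = 1.0 - p[0]

# Effective reach, as defined in the abstract: the mean of the exposure
# distribution, i.e. the expected number of advertisements seen.
effective_reach = sum(i * pi for i, pi in enumerate(p))

print(f"reach = {reach:.2f}, effective reach = {effective_reach:.2f}")
```

With these illustrative values, half the population sees at least one advertisement, while the average person sees 0.78 of the three advertisements.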
 Date Issued
 1987, 1987
 Identifier
 AAI8721837, 3086665, FSDT3086665, fsu:76140
 Format
 Document (PDF)
 Title
 ESTIMATION AND PREDICTION FOR EXPONENTIAL TIME SERIES MODELS.
 Creator

MOHAMED, FOUAD YEHIA., Florida State University
 Abstract/Description

This work is concerned with the study of stationary time series models in which the marginal distribution of the observations follows an exponential distribution. This is in contrast to the standard models in the literature, where the error sequence and hence the marginal distributions of the observations...
 Date Issued
 1981, 1981
 Identifier
 AAI8205698, 3085176, FSDT3085176, fsu:74671
 Format
 Document (PDF)
 Title
 Estimation and testing for some nonlinear time series biological population models.
 Creator

Lee, Sauchi Stephen., Florida State University
 Abstract/Description

Ecologists construct population models in order to understand the underlying dynamics involved in population growth processes. Among these is the well-known Ricker model $X_{n+1} = X_n \exp(r(1 - X_n/K))$, where $X_n$ is the population size at the nth generation with growth rate $r > 0$ and carrying capacity $K > 0$. This is a deterministic model for single-species population growth. It can exhibit a remarkable spectrum of dynamical behavior, from a stable equilibrium point to stable cyclic oscillations, and finally to chaotic oscillations. However, neither a stable equilibrium point nor stable cycles are attained in laboratory or field populations because of random environmental variations and measurement error. Based on the stochastic nature of the observed population size, we develop methods of analyzing some nonlinear time series models for population growth. The corresponding stochastic counterpart of the Ricker model is $X_{n+1} = X_n \exp(\theta_1 + \theta_2 X_n + \epsilon_n)$, where the $\epsilon_n$'s are iid random variables with mean 0 and variance $\sigma^2$, with the parameters $\theta_1 > 0$ and $\theta_2 < 0$. We further generalize the above model. We estimate the parameters of these models by Conditional Least Squares. Consistency and asymptotic normality of the CLS estimators are established. Testing hypotheses on some subsets of the parameters is also included. Simulation studies suggest that the asymptotic results of most, but not all, of these models are applicable to small or moderate sample sizes. Finally, we apply the methodology to analyze some real data sets.
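The stochastic Ricker model is straightforward to simulate and fit. The sketch below simulates the model and recovers $(\theta_1, \theta_2)$ by least squares on the log scale, a simple surrogate for the dissertation's Conditional Least Squares procedure (all parameter values are illustrative assumptions):

```python
import numpy as np

# Simulate X_{n+1} = X_n * exp(th1 + th2 * X_n + eps_n) with iid Gaussian eps_n,
# then recover (th1, th2) by regressing log(X_{n+1} / X_n) on X_n, which is
# linear in the parameters. (Illustrative values; not from the dissertation.)
rng = np.random.default_rng(0)
th1, th2, sigma, n = 0.5, -0.01, 0.1, 500

x = np.empty(n + 1)
x[0] = 10.0
for t in range(n):
    x[t + 1] = x[t] * np.exp(th1 + th2 * x[t] + rng.normal(0.0, sigma))

y = np.log(x[1:] / x[:-1])                    # observed log growth rates
th2_hat, th1_hat = np.polyfit(x[:-1], y, 1)   # slope, intercept

print(f"th1_hat = {th1_hat:.3f}, th2_hat = {th2_hat:.4f}")
```

With $\theta_1 = 0.5$ the deterministic skeleton has a stable equilibrium at $-\theta_1/\theta_2 = 50$, so the simulated series fluctuates around that level and the estimates land close to the true values.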
 Date Issued
 1991, 1991
 Identifier
 AAI9208152, 3087708, FSDT3087708, fsu:76518
 Format
 Document (PDF)
 Title
 Estimation of the number of classes of objects through presence/absence data.
 Creator

Norris, James Lawrence, III., Florida State University
 Abstract/Description

This research involves the estimation of the total number of classes of objects in a region by sampling sectors or quadrats. For each selected quadrat, the classes are recorded. From these data, estimates and/or confidence limits for the number of classes in the region are developed. Models which differ in their methods of sampling (simple random sampling or stratified random sampling) and in their assumptions concerning the classes are investigated. We present three simple random sampling models: a mixture model, a Bayesian lower limit model, and a $j$th-order bootstrap bias-correction model. For the mixture model, we develop an asymptotic confidence relation for the number of classes as well as discuss optimal sampling designs. For the next model, we obtain an asymptotic Bayesian lower limit for the expected number of unobserved classes, with the limit being robust to the prior on $\theta$, the number of classes. Our $j$th-order bootstrap bias-corrected estimator of $\theta$ extends the (first-order) bootstrap estimator reported by Smith and van Belle (1984). We then contrast stratified random sampling with simple random sampling and demonstrate that the expected number of observed classes can be greatly increased by stratification. We also extend some components of the simple random sampling models to stratified random sampling.
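To make the bias-correction idea concrete, here is one common form of a first-order bootstrap bias-corrected richness estimator, in the spirit of the Smith and van Belle estimator the abstract extends (the exact form used in the dissertation may differ, and the data matrix is illustrative):

```python
import numpy as np

# Presence/absence data: rows are the n sampled quadrats, columns are the
# classes observed at least once. (Illustrative matrix, not dissertation data.)
data = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 0, 1],
    [1, 0, 0],
])

n = data.shape[0]          # number of quadrats sampled
s_obs = data.shape[1]      # number of classes observed at least once
q = data.sum(axis=0)       # per-class count of quadrats containing the class

# First-order bootstrap bias correction: add, for each observed class, the
# probability that a bootstrap resample of n quadrats would miss it entirely.
theta_hat = s_obs + np.sum((1.0 - q / n) ** n)
print(theta_hat)
```

Here the ubiquitous class contributes nothing to the correction, while each rare class (seen in 1 of 4 quadrats) contributes $(3/4)^4 \approx 0.316$, pushing the estimate above the observed count of 3.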
 Date Issued
 1990, 1990
 Identifier
 AAI9100063, 3162082, FSDT3162082, fsu:78280
 Format
 Document (PDF)
 Title
 Estimation under censoring with missing failure indicators.
 Creator

Subramanian, Sundar., Florida State University
 Abstract/Description

The Kaplan-Meier estimator of a survival function is well known to be asymptotically efficient when the cause of failure (censored or uncensored) is always observed. We consider the problem of finding an estimator when the failure indicators are missing completely at random. Under this assumption, it is known that the method of nonparametric maximum likelihood fails to work in this problem. We introduce a new estimator that is a smooth functional of the Nelson-Aalen estimators of certain cumulative transition intensities. The asymptotic distribution of the estimator is derived using the functional delta method. Simulation studies reveal that this estimator competes well with the existing estimators. The idea is extended to the Cox model, and estimators are introduced for the regression parameter and the cumulative baseline hazard function.
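The Nelson-Aalen estimator named in the abstract is the standard nonparametric estimator of a cumulative hazard, $\hat{\Lambda}(t) = \sum_{t_i \le t} d_i / n_i$, summing events divided by the number at risk. A minimal sketch on toy right-censored data (the data are illustrative):

```python
import numpy as np

# Toy right-censored sample: observation time and failure indicator
# (1 = failure observed, 0 = censored). Illustrative values only.
times  = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
events = np.array([1,   1,   0,   1,   1  ])

order = np.argsort(times, kind="stable")
times, events = times[order], events[order]

# Nelson-Aalen: at each observed failure, add d/n where n is the number
# still at risk just before that time.
cum_hazard = 0.0
n_at_risk = len(times)
hazard_path = []
for t, d in zip(times, events):
    if d == 1:
        cum_hazard += 1.0 / n_at_risk
    n_at_risk -= 1
    hazard_path.append((t, cum_hazard))

print(hazard_path)
```

The increments here are 1/5, 1/4, 1/2, and 1/1 at the four failure times; the censored observation at time 3 contributes no jump but does shrink the risk set. The dissertation's estimator is a smooth functional of such estimators for several transition intensities at once.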
 Date Issued
 1995, 1995
 Identifier
 AAI9614520, 3088854, FSDT3088854, fsu:77653
 Format
 Document (PDF)
 Title
 Finite horizon singular control and a related two-person game.
 Creator

Santana, Paulo Reinhardt., Florida State University
 Abstract/Description

We consider the finite horizon problem of tracking a Brownian motion, with possibly nonzero drift, by a process of bounded variation, in such a way as to minimize the total expected cost of "action" and "deviation from a target state." The cost of "action" is given by two functions (of time), which represent the price per unit of increase and decrease in the state process; the cost of "deviation" is incurred continuously at a rate given by a function convex in the state variable, together with a terminal cost function. We obtain the optimal cost function for this problem, as well as an $\varepsilon$-optimal strategy, through the solution of a system of variational inequalities, which has a stochastic representation as the value function for an appropriate two-person game.
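A toy discretization can make the cost structure concrete. The Monte Carlo sketch below evaluates the cost functional under a simple reflecting-band policy; this is only an illustration of the action/deviation trade-off, not the optimal strategy the dissertation derives, and every parameter and cost function is an assumption:

```python
import random

# Toy discretization of the tracking problem: the state is a drifting Brownian
# motion plus a bounded-variation control. Cost = price per unit of upward and
# downward control, plus a running deviation cost h(x) = x**2, plus a terminal
# cost g(x) = x**2. All values are illustrative assumptions.
random.seed(0)
T, steps = 1.0, 1000
dt = T / steps
mu, c_up, c_down = 0.3, 0.5, 0.5
band = 1.0  # naive policy: push the state back whenever |x| exceeds the band

def run_cost():
    x, cost = 0.0, 0.0
    for _ in range(steps):
        x += mu * dt + random.gauss(0.0, dt ** 0.5)  # Brownian increment
        if x > band:                                 # push down, pay c_down/unit
            cost += c_down * (x - band)
            x = band
        elif x < -band:                              # push up, pay c_up/unit
            cost += c_up * (-band - x)
            x = -band
        cost += x * x * dt                           # running deviation cost
    return cost + x * x                              # terminal cost

avg_cost = sum(run_cost() for _ in range(200)) / 200
print(f"average cost under the band policy: {avg_cost:.3f}")
```

Widening the band trades lower action cost for higher deviation cost; the dissertation characterizes the optimal balance via variational inequalities rather than by policy search.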
 Date Issued
 1988, 1988
 Identifier
 AAI8814430, 3086851, FSDT3086851, fsu:76324
 Format
 Document (PDF)