Current Search: Theses and Dissertations » info:fedora/ir:thesisCModel » Meyer-Baese, Anke
Search results
 Title
 Interactive 3D GPU-Based Breast Mass Lesion Segmentation Method Based on Level Sets for DCE-MRI Images.
 Creator

Zavala Romero, Olmo S., Meyer-Baese, Anke, Sussman, Mark, Erlebacher, Gordon, Slice, Dennis E., Wang, Xiaoqiang, Florida State University, College of Arts and Sciences, Department of Scientific Computing
 Abstract/Description

A new method for the segmentation of 3D breast lesions in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) images, using parallel programming with general-purpose computing on graphics processing units (GPGPUs), is proposed. The method has two main parts: a preprocessing step and a segmentation algorithm. In the preprocessing step, DCE-MRI images are registered using an intensity-based rigid transformation algorithm based on gradient descent. After the registration, voxels that correspond to breast lesions are enhanced using the Naïve Bayes machine learning classifier. This classifier is trained to identify four different classes inside breast images: lesion, normal tissue, chest, and background. Training is performed by manually selecting 150 voxels for each of the four classes from images in which breast lesions have been confirmed by an expert in the field. Thirteen attributes obtained from the kinetic curves of the selected voxels are later used to train the classifier. Finally, the classifier is used to increase the intensity values of voxels labeled as lesions and to decrease the intensities of all other voxels. The preprocessed images are used for volume segmentation of the breast lesions using a level set method based on the active contours without edges (ACWE) algorithm. The segmentation algorithm is implemented in OpenCL for GPGPUs to accelerate the original model by parallelizing two main steps of the segmentation process: the computation of the signed distance function (SDF) and the evolution of the segmented curve. The proposed framework uses OpenGL to display the segmented volume in real time, allowing the physician to obtain immediate feedback on the current segmentation progress. The proposed implementation of the SDF is compared with an optimal implementation developed in Matlab and achieves speedups of 25 and 12 for 2D and 3D images, respectively.
Moreover, the OpenCL implementation of the segmentation algorithm is compared with an optimal implementation of the narrowband ACWE algorithm. Peak speedups of 55 and 40 are obtained for 2D and 3D images, respectively. The segmentation algorithm has been developed as open source software, with different versions for 2D and 3D images, and can be used in different areas of medical imaging as well as in areas within computer vision, such as tracking, robotics, and navigation.
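The core of the segmentation step described above is the ACWE (Chan-Vese) level-set evolution. As a rough illustration of the idea — not the dissertation's OpenCL implementation — here is a minimal 2D NumPy sketch of one explicit update; the simplifications (no regularized delta function, plain finite differences, no narrow band) and all names are my own.

```python
import numpy as np

def acwe_step(phi, img, dt=0.5, mu=0.2, lam1=1.0, lam2=1.0):
    """One explicit ACWE (Chan-Vese) level-set update on a 2D image.

    phi : level-set function; the contour interior is phi > 0.
    img : grayscale image, same shape as phi.
    """
    inside = phi > 0
    # Mean intensity inside and outside the current contour.
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0

    # Curvature term: divergence of the normalized gradient of phi.
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    curv = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

    # Region competition: each pixel is pushed toward the region
    # whose mean intensity it matches better.
    force = mu * curv - lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2
    return phi + dt * force
```

Iterating this update from a small seed grows the contour onto a bright lesion-like region; the dissertation's method applies the same kind of evolution in 3D, over a narrow band, with both the SDF computation and the curve evolution parallelized in OpenCL.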
 Date Issued
 2015
 Identifier
 FSU_2015fall_ZavalaRomero_fsu_0071E_12893
 Format
 Set of related objects
 Title
 Parma: Applications of Vector-Autoregressive Models to Biological Inference with an Emphasis on Procrustes-Based Data.
 Creator

Soda, K. James (Kenneth James), Slice, Dennis E., Beaumont, Paul M., Beerli, Peter, Meyer-Baese, Anke, Shanbhag, Sachin, Florida State University, College of Arts and Sciences, Department of Scientific Computing
 Abstract/Description

Many phenomena in ecology, evolution, and organismal biology relate to how a system changes through time. Unfortunately, most of the statistical methods that are common in these fields represent samples as static scalars or vectors. Since variables in temporally-dynamic systems do not have stable values, this representation is not ideal. Differential equation and basis function representations provide alternative systems for description, but they are not without drawbacks of their own. Differential equations are typically outside the scope of statistical inference, and basis function representations rely on functions that relate to the original data only in terms of qualitative appearance, not in terms of any property of the original system. In this dissertation, I propose that vector autoregressive-moving average (VARMA) and vector autoregressive (VAR) processes can represent temporally-dynamic systems. Under this strategy, each sample is a time series instead of a scalar or vector. Unlike differential equations, these representations facilitate statistical description and inference, and, unlike basis function representations, these processes directly relate to an emergent property of dynamic systems: their cross-covariance structure. In the first chapter, I describe how VAR representations for biological systems lead both to a metric for the difference between systems, the Euclidean process distance, and to a statistical test to assess whether two time series may have originated from a single VAR process, the likelihood ratio test for a common process. Using simulated time series, I demonstrate that the likelihood ratio test for a common process has a true Type I error rate that is close to the prespecified nominal error rate, regardless of the number of subseries in the system or of the order of the processes. Further, using the Euclidean process distance as a measure of difference, I establish power curves for the test using logistic regression.
The test has a high probability of rejecting a false null hypothesis, even for modest differences between series. In addition, I illustrate that if two competitors follow the Lotka-Volterra equations for competition with some additional white noise, the system deviates from VAR assumptions. Yet the test can still differentiate between a simulation based on these equations in which the constraints on the system change and a simulation where the constraints do not change. Although the Type I error rate is inflated in this scenario, the degree of inflation does not appear to be larger when the system deviates more noticeably from model assumptions. In the second chapter, I investigate the performance of the likelihood ratio test for a common process with shape trajectory data. Shape trajectories are an extension of geometric morphometric data in which a sample is a set of temporally-ordered shapes as opposed to a single static shape. Like all geometric morphometric data, each shape in a trajectory is inherently high-dimensional. Since the number of parameters in a VAR representation grows quadratically with the number of subseries, shape trajectory data will often require dimension reduction before a VAR representation can be estimated, but the effects that this reduction will have on subsequent inferences remain unclear. In this study, I simulated shape trajectories based on the movements of roundworms. I then reduced the number of variables that described each shape using principal components analysis. Based on these lower-dimensional representations, I estimated the likelihood ratio test's Type I error rate and power with the simulated trajectories. In addition, I used the same workflow on an empirical dataset of women walking (originally from Morris13), trying varying amounts of preprocessing before applying the workflow.
The likelihood ratio test's Type I error rate was mildly inflated with the simulated shape trajectories, but the test had a high probability of rejecting false null hypotheses. Without preprocessing, the likelihood ratio test for a common process had a highly inflated Type I error rate with the empirical data, but when the sampling density is lowered and the number of cycles is standardized within a comparison, the degree of inflation becomes comparable to that of the simulated shape trajectories. Yet these preprocessing steps do not appear to negatively impact the test's power. Visualization is a crucial step in geometric morphometric studies, but there are currently few, if any, methods to visualize differences in shape trajectories. To address this absence, I propose an extension to the classic vector-displacement diagram. In this new procedure, the VAR representations for two trajectories' processes generate two simulated trajectories that share the same shocks. Then, a vector-displacement diagram compares the simulated shapes at each time step. The set of all diagrams then illustrates the difference between the trajectories' processes. I assessed the validity of this procedure using two simulated shape trajectories, one based on the movements of roundworms and the other on the movements of earthworms. The procedure produced mixed results. Some diagrams show comparisons between shapes that are similar to those in the original trajectories, but others do not. Of particular note, diagrams show a bias towards whichever trajectory's process was used to generate the pseudorandom shocks. This implies that the shocks to the system are just as crucial a component of a trajectory's behavior as the VAR model itself. Finally, in the third chapter I discuss iPARMA, a new R library to study dynamic systems and represent them as VAR and VARMA processes.
Since certain processes can have multiple VARMA representations, the routines in this library place an emphasis on the reverse echelon format. For every process, there is only one VARMA model in reverse echelon format. The routines in iPARMA cover a diverse set of topics, but they all generally fall into one of four categories: simulation and study, model estimation, hypothesis testing, and visualization methods for shape trajectories. Within the chapter, I discuss highlights and features of key routines' algorithms, as well as how they differ from analogous routines in the R package MTS. In many regards, this dissertation is foundational, so it provides a number of avenues for future research. One major area for further work involves alternative ways to represent a system as a VAR or VARMA process. For example, the parameter estimates in a VAR or VARMA model could depict a process as a point in parameter space. Other potentially fruitful areas include the extension of representational applications to other families of time series models, such as cointegrated models, or altering the generalized Procrustes algorithm to better suit shape trajectories. Based on these extensions, it is my hope that statistical inference based on stochastic process representations will help to expand what systems biologists are able to study and what questions they are able to answer.
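The likelihood ratio test for a common process, as described above, compares one shared model fitted to both series against separate per-series models. The idea can be sketched for scalar AR(1) processes (the dissertation's VAR(p) setting generalizes the same comparison); this is an illustrative sketch with my own function names, not the iPARMA implementation.

```python
import numpy as np

def ar1_loglik(series_list):
    """Gaussian conditional log-likelihood of one AR(1) model
    x_t = c + a * x_{t-1} + e_t, fitted jointly by OLS to every
    series in series_list."""
    y = np.concatenate([x[1:] for x in series_list])
    X = np.concatenate([x[:-1] for x in series_list])
    A = np.column_stack([np.ones_like(X), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    n = len(y)
    sigma2 = resid @ resid / n  # MLE of the innovation variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def lr_common_process(x1, x2):
    """LR statistic for H0: x1 and x2 were generated by the same
    AR(1) process.  Under H0 it is asymptotically chi-square with
    3 degrees of freedom (intercept, slope, variance)."""
    ll_separate = ar1_loglik([x1]) + ar1_loglik([x2])
    ll_pooled = ar1_loglik([x1, x2])
    return 2.0 * (ll_separate - ll_pooled)

def simulate_ar1(a, c, n, rng):
    """Generate n observations of x_t = c + a*x_{t-1} + N(0,1) noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = c + a * x[t - 1] + rng.standard_normal()
    return x
```

Comparing the statistic against a chi-square critical value with 3 degrees of freedom (about 7.8 at the 5% level) gives the test decision: large values reject the hypothesis of a common process.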
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Soda_fsu_0071E_13917_P
 Format
 Set of related objects