Time Parallelization Methods for the Solution of Initial Value Problems

Name(s):
Yu, Yanan, author
Srinivasan, Ashok, professor directing thesis
Okten, Giray, university representative
Kumar, Piyush, committee member
Tyson, Gary, committee member
Yuan, Xin, committee member
Department of Computer Science, degree granting department
Florida State University, degree granting institution

Type of Resource:  text  
Genre:  Text  
Issuance:  monographic  
Date Issued:  2010  
Publisher:  Florida State University  
Place of Publication:  Tallahassee, Florida  
Physical Form:  computer online resource

Extent:  1 online resource  
Language(s):  English  
Abstract/Description:  Many scientific problems are posed as Ordinary Differential Equations (ODEs). A large subset of these are initial value problems, which are typically solved numerically. The solution starts by using a known state of the ODE system to determine the state at a subsequent point in time. This process is repeated many times. When the computational demand is high due to a large state space, parallel computers can be used efficiently to reduce the solution time. Conventional parallelization strategies distribute the state space of the problem amongst processors, dividing the work of a single time step amongst them. They are not effective when the computational problem has fine granularity, for example, when the state space is relatively small and the computational effort arises largely from the long time span of the initial value problem.

This limitation is increasingly becoming a bottleneck for important applications, in particular due to two architectural trends. One is the increase in the number of cores on massively parallel machines. The high-end systems of today have hundreds of thousands of cores, and machines of the near future are expected to support on the order of a million simultaneous threads. Computations that were coarse grained on earlier machines are often fine grained on these. The other trend is the increased number of cores on a chip, which has given the average user desktop access to parallel computing. A typical low-end user requiring the solution of an ODE with a small state space would earlier not have considered a parallel system, but such a system is now available. Users in both of these environments need to deal with the problem of parallelizing ODEs with a small state space, and parallelization of the time domain appears promising for this.
The idea behind time parallelization is to divide the entire time span of the initial value problem into smaller intervals and have each processor compute one interval at a time, instead of dividing the state space. The difficulty lies in the fact that time is an intrinsically sequential quantity: since we are solving an initial value problem, one time interval can only start after its preceding interval completes. Earlier attempts at parallelizing the time domain were not very successful. This thesis proposes two different time parallelization strategies and demonstrates their effectiveness in dealing with the bottleneck described above.

The thesis first proposes a hybrid dynamic iterations method, which combines conventional sequential ODE solvers with dynamic iterations. Empirical results demonstrate a factor of two to three improvement in performance over a sequential solver on an 8-core processor, while conventional state space decomposition is not useful due to the communication overhead. Compared with Picard iterations (also parallelized in the time domain), the proposed method shows better convergence and speedup when high accuracy is required.

The second proposed method is a data-driven time parallelization algorithm. The idea is to use results from related prior computations to predict the states in a new computation, and then parallelize the new computation in the time domain. The effectiveness of this method is demonstrated on Molecular Dynamics (MD) simulations of Carbon Nanotube (CNT) tensile tests; MD simulation is a special application of initial value problems. Empirical results show that the data-driven time parallelization method scales to two to three orders of magnitude more processors than conventional state space decomposition methods. This approach achieves the highest scalability for MD on general purpose computers.
The time parallel method can also be combined with state space decomposition to improve the scalability and efficiency of the conventional parallelization method. This thesis presents a combined data-driven time parallelization and state space decomposition method and applies it to MD simulations of soft matter, which is typically seen in computational biology. Since the MD method is an important atomistic simulation technique widely used in computational chemistry, biology, and materials science, the data-driven time parallel method also suggests a promising approach for realistic simulations with long time spans.
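To illustrate the kind of iteration the abstract refers to, the following is a minimal sketch of classical Picard iteration for an initial value problem y' = f(t, y), y(t0) = y0. It is not the thesis's hybrid method; it only shows the structural point made above: each sweep updates the entire trajectory at once, and within a sweep the per-interval integrals depend only on the previous sweep's trajectory, so they could be computed on different processors. All function names here are illustrative, not from the thesis.

```python
import numpy as np

def picard_iterations(f, y0, t, n_iter):
    """Picard iteration for y' = f(t, y), y(t[0]) = y0.

    Each sweep applies y_{k+1}(t) = y0 + integral from t[0] to t of
    f(s, y_k(s)) ds, discretized with the trapezoidal rule. The
    segment integrals within one sweep are mutually independent,
    which is what makes parallelization over the time domain possible.
    """
    y = np.full_like(t, y0, dtype=float)   # initial guess: constant trajectory
    for _ in range(n_iter):
        g = f(t, y)                        # f evaluated along the current trajectory
        # trapezoidal contribution of each sub-interval (independent per interval)
        seg = 0.5 * (g[1:] + g[:-1]) * np.diff(t)
        # prefix sums turn per-interval integrals into the cumulative integral
        y = y0 + np.concatenate(([0.0], np.cumsum(seg)))
    return y

# Example: y' = y, y(0) = 1 on [0, 1]; exact solution is exp(t)
t = np.linspace(0.0, 1.0, 101)
y = picard_iterations(lambda t, y: y, 1.0, t, n_iter=20)
print(abs(y[-1] - np.e))  # residual is only the trapezoidal discretization error
```

The sequential dependence the abstract describes shows up in the prefix sum: a given interval's starting state depends on all earlier intervals, but only through the previous sweep, not the current one.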
Identifier:  FSU_migr_etd0911 (IID)  
Submitted Note:  A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy.  
Degree Awarded:  Summer Semester, 2010.  
Date of Defense:  April 27, 2010.  
Keywords:  Data-driven, Dynamic Iterations, Time Parallelization, Carbon Nanotube  
Bibliography Note:  Includes bibliographical references.  
Advisory Committee:  Ashok Srinivasan, Professor Directing Thesis; Giray Okten, University Representative; Piyush Kumar, Committee Member; Gary Tyson, Committee Member; Xin Yuan, Committee Member.  
Subject(s):  Computer science  
Persistent Link to This Record:  http://purl.flvc.org/fsu/fd/FSU_migr_etd0911  
Owner Institution:  FSU 
Yu, Y. (2010). Time Parallelization Methods for the Solution of Initial Value Problems. Retrieved from http://purl.flvc.org/fsu/fd/FSU_migr_etd0911