The memory system hierarchy has remained largely unchanged for many years, leading to a growing gap between main-memory access times and local-disk paging latencies. This gap has become a performance bottleneck for memory-intensive applications, which can quickly exhaust all available main memory and force the kernel to start swapping to disk. One solution to this problem is to insert a new level -- remote memory -- into the traditional memory hierarchy between local main memory and local disk. Earlier work on the Adaptive NEtwork MemOry engiNE (ANEMONE) system demonstrated that remote memory access is a viable and attractive solution to this problem when the paging process exhibits a random block access pattern. This thesis reduces the network communication latency of the Anemone system using three mechanisms: 1) a kernel-level lightweight reliable datagram protocol that replaces NFS, 2) an aggressive page acknowledgment policy, and 3) a two-level caching mechanism. Collectively, these three techniques reduce the average network paging latency from 800 microseconds to 500 microseconds and speed up average application execution time by a factor of 3 to 9.
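The two-level caching idea mentioned above can be sketched as a small, fast first-level cache backed by a larger second-level cache, with a fetch over the network only on a miss in both. The sketch below is purely illustrative: the class name, cache sizes, and LRU eviction policy are assumptions for this example, not Anemone's actual design or parameters.

```python
from collections import OrderedDict

class TwoLevelPageCache:
    """Illustrative two-level page cache (hypothetical, not Anemone's code).

    Pages are looked up in a small L1 first, then a larger L2; on an L2
    hit the page is promoted to L1, and only a miss in both levels goes
    to remote memory. Both levels evict in LRU order.
    """

    def __init__(self, l1_size=4, l2_size=16):
        self.l1 = OrderedDict()   # page_no -> page data, LRU order
        self.l2 = OrderedDict()
        self.l1_size = l1_size
        self.l2_size = l2_size

    def get(self, page_no, fetch_remote):
        if page_no in self.l1:                # L1 hit: refresh recency
            self.l1.move_to_end(page_no)
            return self.l1[page_no]
        if page_no in self.l2:                # L2 hit: promote to L1
            data = self.l2.pop(page_no)
        else:                                 # miss: page in over the network
            data = fetch_remote(page_no)
        self._insert_l1(page_no, data)
        return data

    def _insert_l1(self, page_no, data):
        self.l1[page_no] = data
        if len(self.l1) > self.l1_size:       # demote LRU page from L1 to L2
            old_no, old_data = self.l1.popitem(last=False)
            self.l2[old_no] = old_data
            if len(self.l2) > self.l2_size:   # L2 full: drop its LRU page
                self.l2.popitem(last=False)
```

Under this sketch, repeated accesses to a hot page are served from L1 without touching the network, and pages evicted from L1 get a second chance in L2 before a remote fetch is needed again.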
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). The copyright in theses and dissertations completed at Florida State University is held by the students who author them.