A Push-Based Prefetching for Remote Caching RAM Grid

2009 
As an innovative grid computing technique for sharing distributed memory resources over a high-speed wide-area network, RAM Grid exploits distributed computing nodes and provides remote memory for user nodes that are short of memory. The performance of RAM Grid is constrained by the expensive cost of network communication. In order to hide the latency of remote memory access and improve performance, the authors proposed push-based prefetching, which enables the memory providers to push potentially useful pages to the user nodes. Each provider employs sequential pattern mining techniques, adapted to the characteristics of memory page access sequences, to locate useful memory pages for prefetching. The effectiveness of the proposed method has been verified through trace-driven simulations.

DOI: 10.4018/jghpc.2009070801

RAM Grid consists of heterogeneous nodes that are connected with a high-speed wide-area network. Using RAM Grid for caching suits the characteristics of a loosely coupled distributed computing environment, which emphasizes providing "best effort" service without guaranteeing a particular degree of performance improvement. In the worst case it is acceptable that performance does not rise (as long as it does not drop), while in a better network environment the benefits are much greater. Nowadays campus or enterprise networks are fast enough to meet the requirements of remote memory sharing, and rapidly developing network technologies will make our approach more and more attractive.

To facilitate the later description, we classify the nodes in RAM Grid (Chu et al., 2006) into five types. The user node is the consumer of remote memory, while the corresponding memory provider is called the busy node, which is drawn from the available nodes. A deputy node serves one user node, acting as a broker that automatically searches for available nodes on behalf of the user node. The intermediate node neither provides nor consumes remote memory; it is ready to become a user node or an available node.

In order to study the potential performance improvement, we compare the overheads of accessing an 8KB block from local disk, from NFS, and from RAM Grid, where remote memory is accessed through a wide-area network with 2ms round-trip latency and 2MB/s bandwidth. From Table 1 we can observe that the caching mechanism in DRACO reduces the overhead by only 25%~30% compared to local disk or NFS access, and that the major data access overhead in DRACO comes from the network transmission cost (nearly 60%). The performance of DRACO can therefore be improved considerably if we reduce or hide part of the transmission cost.
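As a rough sanity check of the transmission cost discussed above, the sketch below estimates the network portion of a single remote page access. It assumes the stated 2ms round-trip latency, reads the paper's "2MB" bandwidth as 2 MB/s, and treats the access as one request/response exchange; the final 60% figure is taken from the Table 1 discussion, while everything else is back-of-the-envelope.

```python
# Back-of-the-envelope estimate of the network cost of one remote-memory
# access in RAM Grid. Assumptions: bandwidth "2MB" means 2 MB/s, and the
# whole 8KB block is fetched in a single request/response exchange.

BLOCK_SIZE = 8 * 1024          # bytes: the 8KB block used in the comparison
RTT = 2e-3                     # seconds: 2 ms round-trip latency
BANDWIDTH = 2 * 1024 * 1024    # bytes/second: assumed 2 MB/s

transfer_time = BLOCK_SIZE / BANDWIDTH   # ~3.9 ms to move 8KB over the link
network_cost = RTT + transfer_time       # ~5.9 ms of network time per block

print(f"transfer time:  {transfer_time * 1e3:.1f} ms")
print(f"network cost:   {network_cost * 1e3:.1f} ms per 8KB block")

# If the network accounts for roughly 60% of the total access overhead,
# the implied end-to-end cost is about network_cost / 0.6, i.e. ~10 ms.
print(f"implied total:  {network_cost / 0.6 * 1e3:.1f} ms")
```

Even under these generous assumptions, the network dominates the access time, which is what motivates hiding the transmission cost rather than merely reducing the number of accesses.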
Prefetching is a common approach to hiding the cost of slow media across the different levels of the storage hierarchy, and in this article we employ prefetching in DRACO to improve its performance. Unlike traditional I/O devices, the busy nodes in DRACO, which provide remote memory for caching, often have spare CPU cycles. The busy nodes can therefore decide the prefetching policy and parameters by themselves, freeing the user nodes of DRACO, which are usually dedicated to heavy computing tasks, from the prefetching process. In contrast to traditional approaches, in which the data to prefetch are chosen by a rather simple algorithm on the user node, such a push-based prefetching scheme can be more effective.
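To make the push-based scheme concrete, here is a minimal sketch of the provider-side logic, assuming a simple first-order successor-frequency model as a stand-in for the sequential pattern mining the authors describe. All names (PushPrefetcher, record_access, pages_to_push) are hypothetical illustrations, not the paper's implementation.

```python
from collections import defaultdict

class PushPrefetcher:
    """Hypothetical busy-node (provider-side) prefetcher sketch.

    Stands in for the paper's sequential pattern mining with a simple
    successor-frequency model: after serving page p, push the pages
    that most often followed p in previously observed access sequences.
    """

    def __init__(self, depth=2):
        # successors[p][q] counts how often page q followed page p.
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last_page = None
        self.depth = depth  # number of candidate pages to push per access

    def record_access(self, page_id):
        # Learn from the access sequence observed at the user node.
        if self.last_page is not None:
            self.successors[self.last_page][page_id] += 1
        self.last_page = page_id

    def pages_to_push(self, page_id):
        # Rank the recorded successors of the requested page and return
        # the most frequent ones as push candidates.
        candidates = self.successors[page_id]
        ranked = sorted(candidates, key=candidates.get, reverse=True)
        return ranked[: self.depth]

# Usage sketch: on each page request, the busy node serves the page,
# pushes likely next pages, and updates its model from the request.
prefetcher = PushPrefetcher()
for requested in [10, 11, 12, 10, 11, 13, 10, 11]:
    pushed = prefetcher.pages_to_push(requested)
    prefetcher.record_access(requested)
    print(f"serve page {requested}, push {pushed}")
```

Because the model lives entirely on the busy node, the user node never spends cycles on prediction: it simply receives pushed pages and keeps them in its remote cache, which is the division of labor the push-based scheme is built around.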