Using Idle Workstations to Implement Predictive Prefetching

ID: TR-2000-06
Authors: Jasmine Y.Q. Wang, Joon Suan Ong, Yvonne Coady, and Michael J. Feeley
Publishing date: June 12, 2000
Length: 8 pages
Abstract
The benefits of Markov-based predictive prefetching have been largely overshadowed by the overhead required to produce high-quality predictions. While both theoretical and simulation results for prediction algorithms appear promising, substantial limitations exist in practice. This gap arises in part because practical implementations must make compromises to reduce overhead. These compromises limit the complexity of the algorithm, the variety of access patterns, and the granularity of trace data that the implementation supports. This paper describes the design and implementation of GMS-3P, an operating-system kernel extension that offloads prediction overhead to idle network nodes. GMS-3P builds on the GMS global memory system, which pages to and from remote workstation memory. In GMS-3P, the target node sends an on-line trace of an application's page faults to an idle node that is running a Markov-based prediction algorithm. The prediction node then uses GMS to prefetch pages to the target node from the memory of other workstations in the network. Our preliminary results show that predictive prefetching can reduce remote-memory page-fault time by 60% or more, and that by offloading prediction overhead to an idle node, GMS-3P can reduce this improved latency by a further 24% to 44%, depending on Markov-model order.
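
The abstract outlines the division of labour in GMS-3P: the target node streams its page-fault trace to an idle node, which runs a Markov-based predictor and asks GMS to prefetch the predicted pages. The following is a minimal sketch in C of such a first-order Markov predictor over a fault trace; the fixed page-id space, the function names, and the synthetic trace are assumptions made for illustration only and are not taken from the GMS-3P implementation.

/*
 * Sketch: first-order Markov page-fault predictor of the kind the
 * GMS-3P prediction node could run over the target node's trace.
 * Table size and names are illustrative assumptions, not the paper's code.
 */
#include <stdio.h>

#define NPAGES 256  /* assumed bound on distinct page ids for this demo */

/* counts[p][q]: how often page q faulted immediately after page p */
static unsigned counts[NPAGES][NPAGES];
static unsigned prev_page;
static int have_prev;

/* Record one page-fault event from the on-line trace. */
static void observe(unsigned page)
{
    if (have_prev)
        counts[prev_page % NPAGES][page % NPAGES]++;
    prev_page = page;
    have_prev = 1;
}

/* Return the most likely next fault after `page`, or -1 if no history. */
static int predict(unsigned page)
{
    const unsigned *row = counts[page % NPAGES];
    unsigned best_count = 0;
    int best = -1;
    for (unsigned q = 0; q < NPAGES; q++) {
        if (row[q] > best_count) {
            best_count = row[q];
            best = (int)q;
        }
    }
    return best;  /* the prediction node would ask GMS to prefetch this page */
}

int main(void)
{
    /* Tiny synthetic trace standing in for the target node's fault stream. */
    unsigned trace[] = { 3, 7, 3, 7, 3, 9, 3, 7 };
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        observe(trace[i]);

    printf("after page 3, prefetch page %d\n", predict(3));  /* prints 7 */
    return 0;
}

A higher-order Markov model, as the abstract's closing sentence implies, would key the table on the last k faults rather than only the most recent one, raising prediction quality at the cost of the memory and update overhead that GMS-3P moves onto the idle node.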