Recent studies have shown that cache-conscious indexes outperform conventional main-memory indexes. Cache-conscious indexes focus on better utilization of each cache line to improve the search performance of a single lookup, but none has exploited cache spatial and temporal locality between consecutive lookups. We show that conventional indexes, even "cache-conscious" ones, suffer from significant cache thrashing between accesses. Such thrashing can impact the performance of applications such as stream processing and of query operations such as index-nested-loops join.

We propose techniques to buffer accesses to memory-resident tree-structured indexes to avoid cache thrashing. We study several alternative designs of the buffering technique, including whether to use fixed-size or variable-size buffers, whether to buffer at each tree level or only at some of the levels, how to support bulk access while concurrent updates are happening to the index, and how to preserve the order of the incoming lookups in the output results. Our methods improve cache performance for both cache-conscious and conventional index structures. Our experiments show that buffering enables a probe throughput two to three times higher than that of traditional methods.

©2004 Association for Computing Machinery
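The core buffering idea can be illustrated with a minimal sketch (all names here are hypothetical and this is not the paper's implementation): instead of traversing the index once per lookup, incoming keys are accumulated in a per-subtree buffer, and when a buffer fills, all of its keys are searched against that subtree in one batch, so the subtree's nodes are probed while still cache-resident.

```python
import bisect

class BufferedIndex:
    """Toy two-level index that batches probes per subtree (illustrative only)."""

    def __init__(self, sorted_keys, fanout=8, buffer_size=64):
        # Partition the sorted leaf keys into contiguous "subtrees".
        self.subtrees = [sorted_keys[i:i + fanout]
                         for i in range(0, len(sorted_keys), fanout)]
        # Separator keys route each probe to the subtree that may contain it.
        self.seps = [st[0] for st in self.subtrees]
        self.buffers = [[] for _ in self.subtrees]
        self.buffer_size = buffer_size
        self.results = []

    def probe(self, key):
        # Route the key to its subtree's buffer instead of searching now.
        i = max(bisect.bisect_right(self.seps, key) - 1, 0)
        self.buffers[i].append(key)
        if len(self.buffers[i]) >= self.buffer_size:
            self._flush(i)

    def _flush(self, i):
        # Batch-search all buffered keys against one subtree while it is hot.
        subtree = self.subtrees[i]
        for k in self.buffers[i]:
            j = bisect.bisect_left(subtree, k)
            self.results.append((k, j < len(subtree) and subtree[j] == k))
        self.buffers[i].clear()

    def finish(self):
        # Drain all remaining buffers and return (key, found) pairs.
        for i in range(len(self.buffers)):
            self._flush(i)
        return self.results

idx = BufferedIndex(list(range(0, 100, 2)), fanout=8, buffer_size=4)
for k in [10, 11, 50, 51]:
    idx.probe(k)
results = dict(idx.finish())
```

Note that results emerge grouped by subtree rather than in arrival order, which is exactly why the abstract's question of preserving the order of incoming lookups in the output arises.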