Using Dead Blocks as a Virtual Victim Cache
dc.contributor.author | Khan, Samira
dc.contributor.author | Jiménez, Daniel A.
dc.contributor.author | Burger, Doug
dc.contributor.author | Falsafi, Babak
dc.date.accessioned | 2023-10-25T15:19:18Z
dc.date.available | 2023-10-25T15:19:18Z
dc.date.issued | 2009-09
dc.description.abstract | Caches mitigate the long memory latency that limits the performance of modern processors. However, caches can be quite inefficient. On average, a cache block in a 2MB L2 cache is dead 59% of the time, i.e., it will not be referenced again before it is evicted. Increasing cache efficiency can improve performance by reducing miss rate, or alternately, improve power and energy by allowing a smaller cache with the same miss rate. This paper proposes using predicted dead blocks to hold blocks evicted from other sets. When these evicted blocks are referenced again, the access can be satisfied from the other set, avoiding a costly access to main memory. The pool of predicted dead blocks can be thought of as a virtual victim cache. A virtual victim cache in a 16-way set-associative 2MB L2 cache reduces misses by 11.7%, yields an average speedup of 12.5%, and improves cache efficiency by 15% on average, where cache efficiency is defined as the average time during which cache blocks contain live information. This virtual victim cache yields a lower average miss rate than a fully-associative LRU cache of the same capacity. Using an adaptive insertion policy, the virtual victim cache gives an average speedup of 17.3% over the baseline 2MB cache. The virtual victim cache significantly reduces cache misses in multi-threaded workloads. For a 2MB cache accessed simultaneously by four threads, the virtual victim cache reduces misses by 12.9% and increases cache efficiency by 16% on average. Alternately, a 1.7MB virtual victim cache achieves about the same performance as a larger 2MB L2 cache, reducing the number of SRAM cells required by 16%, thus maintaining performance while reducing power and area.
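The following is a minimal sketch of the idea described in the abstract: blocks evicted from one set are parked in predicted-dead frames of a partner set, so a later reference can be satisfied there instead of from main memory. It assumes a simple set-associative model; the names (VirtualVictimCacheSketch, partner_set, place_victim) and the bit-flip set pairing are illustrative assumptions, not details taken from the report.

class Frame:
    def __init__(self):
        self.tag = None        # None means the frame holds no valid block
        self.dead = True       # what a dead-block predictor would currently say

class VirtualVictimCacheSketch:
    def __init__(self, num_sets=2048, ways=16):
        self.sets = [[Frame() for _ in range(ways)] for _ in range(num_sets)]

    def partner_set(self, idx):
        # Pair sets by flipping the low index bit; victims evicted from one set
        # may be kept in predicted-dead frames of its partner (an assumed pairing).
        return idx ^ 1

    def lookup(self, idx, tag):
        # Probe the home set, then the partner set acting as the victim pool.
        for s in (idx, self.partner_set(idx)):
            for f in self.sets[s]:
                if f.tag == tag:
                    f.dead = False     # a reference shows the block is live again
                    return True        # hit, possibly a "virtual victim cache" hit
        return False                   # miss: would go to main memory

    def place_victim(self, idx, victim_tag):
        # On eviction from set idx, try to reuse a predicted-dead or invalid
        # frame in the partner set instead of discarding the victim.
        for f in self.sets[self.partner_set(idx)]:
            if f.dead or f.tag is None:
                f.tag, f.dead = victim_tag, False
                return True
        return False                   # no dead frame free; the victim is dropped

def cache_efficiency(live_cycles, total_cycles, num_frames):
    # Cache efficiency in the abstract's sense: the average fraction of time
    # during which cache frames hold live (still-to-be-referenced) blocks.
    return live_cycles / (total_cycles * num_frames)

A short usage example under the same assumptions: a victim of set 0 is parked in its partner set and later found there without a memory access.

vvc = VirtualVictimCacheSketch()
vvc.place_victim(0, 0xBEEF)
assert vvc.lookup(0, 0xBEEF)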
dc.description.department | Computer Science
dc.identifier.uri | https://hdl.handle.net/20.500.12588/2158
dc.language.iso | en_US
dc.publisher | UTSA Department of Computer Science
dc.relation.ispartofseries | Technical Report; CS-TR-2009-009
dc.title | Using Dead Blocks as a Virtual Victim Cache
dc.type | Technical Report |