Abstract:
State-of-the-art embedded processors are used in several domains, such as vision-based and big data applications. Such applications process a huge amount of information per task and therefore need frequent main memory accesses to complete their computation. In this scenario, a larger last-level cache (LLC) would improve the performance and throughput of the system by substantially reducing the global miss rate and miss penalty. However, the extended cache memory would increase power consumption, which is especially significant for battery-driven mobile devices. Near-threshold operation of memory cells is a notable solution for saving a substantial amount of energy in such applications. We propose a cache architecture that takes advantage of both near-threshold and standard LLC operation to meet the required power and performance constraints. A controller unit dynamically switches the LLC between the standard and near-threshold operating regions based on application-specific behavior. The controller can also power gate a portion of the LLC to further reduce leakage power. Simulating several MiBench benchmarks, we show that our proposed cache architecture reduces average energy consumption by 22% with a minimal average runtime penalty of 2.5% over a baseline architecture with no cache reconfigurability.
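The controller described above can be pictured as a simple per-interval policy: switch the LLC to its standard (nominal-voltage) region during memory-intensive phases, drop to near-threshold otherwise, and power-gate part of the cache when it is lightly used. The sketch below is an illustrative assumption of such a policy; the class name, metrics, and threshold values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an LLC mode controller in the spirit of the paper.
# All names and threshold values are illustrative assumptions.
from enum import Enum

class Mode(Enum):
    STANDARD = "standard"        # nominal voltage: full speed, higher power
    NEAR_THRESHOLD = "near_vt"   # near-threshold voltage: slower, low power

class LLCController:
    def __init__(self, hot_miss_rate=0.05, cold_util=0.25):
        self.hot_miss_rate = hot_miss_rate  # above this, favor performance
        self.cold_util = cold_util          # below this, gate part of the LLC
        self.mode = Mode.STANDARD
        self.gated_fraction = 0.0

    def update(self, miss_rate, utilization):
        """Choose operating region and power gating for one monitoring interval."""
        # Memory-intensive phase: run the LLC at nominal (standard) voltage.
        if miss_rate > self.hot_miss_rate:
            self.mode = Mode.STANDARD
        else:
            self.mode = Mode.NEAR_THRESHOLD
        # Lightly used LLC: power-gate a portion of it to cut leakage.
        self.gated_fraction = 0.5 if utilization < self.cold_util else 0.0
        return self.mode, self.gated_fraction

ctrl = LLCController()
print(ctrl.update(miss_rate=0.10, utilization=0.80))  # memory-intensive phase
print(ctrl.update(miss_rate=0.01, utilization=0.10))  # idle phase
```

In a real design the decision would be driven by hardware performance counters sampled each interval, and the "gated fraction" would correspond to disabling whole cache ways or banks so their contents can be safely flushed first.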
Date of Conference: 22-24 October 2018
Date Added to IEEE Xplore: 01 July 2019