ABSTRACT
We explore processor-cache affinity scheduling of parallel network protocol processing in a setting in which protocol processing executes on a shared-memory multiprocessor concurrently with a general workload of non-protocol activity. We find that affinity scheduling can significantly reduce the communication delay associated with protocol processing, enabling the host to support a greater number of concurrent streams and to provide a higher maximum throughput to individual streams. In addition, we compare implementations of two parallelization approaches (Locking and Independent Protocol Stacks) with very different caching behaviors.
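The core idea of processor-cache affinity scheduling can be illustrated with a small sketch. This is not the paper's implementation; the class name, the `max_imbalance` threshold, and the warm/cold bookkeeping are illustrative assumptions. The scheduler dispatches each stream's protocol-processing work back to the processor whose cache likely still holds that stream's state (the processor it last ran on), falling back to the least-loaded processor when the preferred one is overloaded:

```python
# Hypothetical sketch of affinity scheduling, not the paper's code.
class AffinityScheduler:
    def __init__(self, num_cpus):
        self.load = [0] * num_cpus  # outstanding tasks per CPU
        self.last_cpu = {}          # stream id -> CPU it last ran on
        self.warm = 0               # dispatches to a likely cache-warm CPU
        self.cold = 0               # dispatches that lose affinity

    def dispatch(self, stream, max_imbalance=2):
        """Pick a CPU for one unit of protocol processing for `stream`."""
        prefer = self.last_cpu.get(stream)
        least = min(range(len(self.load)), key=self.load.__getitem__)
        # Honour affinity unless the preferred CPU is badly overloaded
        # relative to the least-loaded one (illustrative threshold).
        if prefer is not None and self.load[prefer] - self.load[least] < max_imbalance:
            cpu, self.warm = prefer, self.warm + 1
        else:
            cpu, self.cold = least, self.cold + 1
        self.load[cpu] += 1
        self.last_cpu[stream] = cpu
        return cpu

    def complete(self, cpu):
        """Mark one unit of work on `cpu` as finished."""
        self.load[cpu] -= 1
```

Under this policy, successive packets of the same stream tend to land on the same processor, so per-stream protocol state stays resident in that processor's cache; the load-imbalance escape hatch keeps affinity from starving other processors when the non-protocol workload is uneven.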