Abstract:
Packet classification is a building block in many network services, such as routing, monitoring, and policy enforcement. In commodity switches, classification is often performed by memory components supporting various rule-matching patterns (longest prefix match, ternary match, exact match, and so on). These memory components are fast but expensive and power-hungry, with power consumption proportional to their size. In this paper, we study the applicability of rule caching and lossy compression to create packet classifiers requiring much less memory than the theoretical size limits of semantically-equivalent representations, enabling a significant reduction in their cost and power consumption. This paper focuses on longest prefix matching. Our objective is to find a limited-size longest prefix match classifier that correctly classifies a high portion of the traffic, so that it can be implemented in commodity switches with classification modules of restricted size. In the lossy compression scheme, a small amount of traffic might observe classification errors; in the rule caching scheme, a special indication is returned for traffic that cannot be classified. We develop optimal dynamic-programming algorithms for both problems and describe how to treat the small amount of traffic that cannot be classified. We generalize our solutions to a wide range of classifiers with different similarity metrics. We evaluate their performance on real classifiers and traffic traces, and show that in some cases we can reduce a classifier's size by orders of magnitude while still classifying almost all traffic correctly.
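To make the rule-caching idea concrete, the following is a minimal sketch (not the paper's algorithm) of a limited-size longest-prefix-match classifier that returns the action of the longest matching prefix, or a special miss indication (`None`) for traffic that cannot be classified and must be handled separately. The rule set and actions are hypothetical.

```python
import ipaddress

def longest_prefix_match(rules, addr):
    """Return the action of the longest matching prefix, or None
    (the 'cannot classify' indication) when no cached rule matches."""
    ip = ipaddress.ip_address(addr)
    best_action, best_len = None, -1
    for prefix, action in rules.items():
        net = ipaddress.ip_network(prefix)
        # Longest prefix wins: keep the match with the largest prefix length.
        if ip in net and net.prefixlen > best_len:
            best_action, best_len = action, net.prefixlen
    return best_action

# Hypothetical limited-size rule set (a cache of a larger classifier).
rules = {
    "10.0.0.0/8": "A",
    "10.1.0.0/16": "B",
}

longest_prefix_match(rules, "10.1.2.3")      # "B": the /16 beats the /8
longest_prefix_match(rules, "192.168.0.1")   # None: miss, defer to slow path
```

A hardware implementation would use a TCAM or trie rather than a linear scan; the point here is only the miss indication, which lets uncached traffic be diverted to a full (software or controller-side) classifier.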
Published in: IEEE/ACM Transactions on Networking (Volume: 25, Issue: 2, April 2017)