NB-Cache: Non-Blocking In-Network Caching for High-Performance Content Routers

2021 
Information-Centric Networking (ICN) provides scalable and efficient content distribution at Internet scale thanks to in-network caching and native multicast. To support these features, a content router needs a high-performance data plane, which consists of three forwarding steps: checking the Content Store (CS), then the Pending Interest Table (PIT), and finally the Forwarding Information Base (FIB). In this work, we build an analytical model of the router and identify the CS as the actual bottleneck. We then propose a novel mechanism called “NB-Cache” to address the CS’s performance issue from a network-wide point of view. In NB-Cache, when packets arrive at a router whose CS is fully loaded, instead of being blocked while waiting for the CS, they are forwarded to the next-hop router, whose CS may not be fully loaded. This approach essentially utilizes the Content Stores of all routers along the forwarding path in parallel rather than checking each CS sequentially. NB-Cache follows an on-demand load-balancing design pattern and can be formulated as a non-trivial N-queue bypass model. We use a Markov chain to establish its theoretical foundation and develop an algorithm for automated generation of the transition rate matrix. Experiments show significant improvement in data plane performance: a 70% reduction in round-trip time (RTT) and a 130% increase in throughput. NB-Cache decouples fast packet forwarding from slower content retrieval, thus substantially reducing the CS’s heavy dependency on fast but expensive memory.
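
The following is a minimal sketch, not the authors' implementation, of the forwarding decision the abstract describes: when a router's local CS lookup queue is fully loaded, the packet is not blocked but is instead passed to the next-hop router, whose CS may have spare capacity, so the Content Stores along the path are effectively used in parallel. The names `Router`, `cs_capacity`, `cs_queue`, and `on_interest` are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of the NB-Cache bypass decision for a path of routers.
from collections import deque


class Router:
    def __init__(self, name, cs_capacity):
        self.name = name
        self.cs_capacity = cs_capacity  # max in-flight CS lookups (assumed)
        self.cs_queue = deque()         # pending CS lookups
        self.next_hop = None            # downstream Router, or None at last hop

    def on_interest(self, interest):
        if len(self.cs_queue) < self.cs_capacity:
            # Normal path: the local CS still has capacity, so queue the lookup here.
            self.cs_queue.append(interest)
            return f"{self.name}: CS lookup queued for {interest}"
        if self.next_hop is not None:
            # NB-Cache path: the local CS is fully loaded, so do not block;
            # hand the packet to the next hop, whose CS may not be loaded.
            return self.next_hop.on_interest(interest)
        # Last hop with a saturated CS: fall back to waiting for the local CS.
        self.cs_queue.append(interest)
        return f"{self.name}: CS saturated, lookup queued for {interest}"


# Usage: a 3-router path where only the first router's CS is saturated.
r3 = Router("R3", cs_capacity=4)
r2 = Router("R2", cs_capacity=4)
r1 = Router("R1", cs_capacity=1)
r1.next_hop, r2.next_hop = r2, r3

print(r1.on_interest("/videos/a"))  # queued at R1
print(r1.on_interest("/videos/b"))  # R1's CS is full, so R2 handles it
```

In this sketch the bypass happens per packet and on demand, mirroring the on-demand load-balancing pattern the abstract mentions; the paper's N-queue bypass model and Markov-chain analysis formalize the resulting behavior rather than the code shown here.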