Lambda Sharing Demonstration via Traffic-Driven Lambda-on-Demand

2007 
This paper proposes GMPLS-capable Lambda-on-Demand, which adjusts the number of load-balanced end-to-end lambdas according to the traffic volume while, for the first time, sharing lambda resources among connections between any two of three nodes.

Introduction

The development of high-performance networks comprising Photonic Cross-Connects (PXCs) with Generalized Multi-Protocol Label Switching (GMPLS) capability [1, 2] has driven an increase in the number of high-end wide-area-network (WAN) applications [3]. Such applications target parallel computing to ensure unlimited scalability, so there will inevitably be a need for load-balanced parallel lambdas to carry their traffic. The number of lambdas should be kept as small as possible, and these lambdas should therefore be configured dynamically in response to the widely fluctuating demands of such applications [4-6]. In addition, lambda resource sharing should be implemented to maximize network resource usage. We anticipate that such a function could yield a cost-effective network implementation by minimizing the network resources to be installed. In this paper, we present Lambda-on-Demand functionality that uses a shared lambda to connect any two of three PXCs forming a triangular network topology in an OSPF-enabled IP-over-photonic network, achieved using novel control servers.

Lambda-on-Demand scheme

Figure 1 illustrates the network configuration for this study. Gigabit Ethernet (GbE) link #1 traverses Router #1, PXC #1, PXC #3, and Router #3. GbE #2 likewise connects Router #1 to Router #3 via PXC #1 and PXC #3. GbE #3 and GbE #4 connect Router #2 to Router #3 via PXC #2 and PXC #3. Tester #1 and Tester #2 generate packets and send them to Tester #3. GbE #2 and GbE #4 share lambda resources between PXC #2 and PXC #3. The traffic volume per second fluctuates between 200 Mbps and 1.6 Gbps. As shown in Fig. 1, we developed and installed three Lambda-on-Demand-capable control servers supporting SNMP, Telnet, GMPLS, and a proprietary protocol for collaborating with the other control servers.

Fig. 1. Network Configuration

Figure 2 shows an example of a messaging diagram among the control servers and PXCs for establishing or deleting GbE #2 to increase or decrease the total link capacity between Router #1 and Router #3. When we run Lambda-on-Demand between Router #1 and Router #3 and set up GbE #1, we initiate Control server #1 (see the initial step). At that time, Control server #1 and Control server #3 each independently select an available Link Aggregation Group (LAG) number, and the control servers then exchange information about the selected LAG number. Next, the interworked control servers check and select the available interfaces to terminate GbE #1. Control server #1 then sets up GbE #1 via GMPLS in collaboration with Control server #3 and the PXCs. The control servers also activate the selected interfaces and finally add them to the LAG via Telnet. Subsequently, Control server #1 monitors the traffic on GbE #1 via SNMP every 8 seconds.

When the traffic volume per second on GbE #1 exceeds 800 Mbps, the interworked control servers check and select the available interfaces that belong to the selected LAG (see the increase step). Next, Control server #1 establishes GbE #2 to connect the selected interfaces via GMPLS. Furthermore, the interworked control servers activate the interfaces and add them to the LAG via Telnet. The packets streaming on GbE #1 and GbE #2
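The initial step's LAG-number agreement can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper states only that both control servers independently select an available LAG number and then exchange it, so the reconciliation rule (taking the smallest number free on both endpoints) is an assumption.

```python
# Hypothetical sketch of the LAG-number agreement in the initial step.
# Each control server holds a set of LAG numbers free on its own router;
# after exchanging them, both sides pick the same number deterministically.

def reconcile_lag(local_free: set, peer_free: set) -> int:
    """Pick a LAG number available on both endpoints.

    The min() tie-break is an assumption; the paper does not specify
    how the two independently selected numbers are reconciled.
    """
    common = local_free & peer_free
    if not common:
        raise RuntimeError("no LAG number available on both endpoints")
    return min(common)
```

Because both control servers evaluate the same rule over the same exchanged sets, they converge on one LAG number without a further negotiation round.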
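The monitoring and increase steps amount to a threshold-driven control loop on the control server. The sketch below is a hypothetical rendering of that loop: the 8-second SNMP polling period and the 800 Mbps add-trigger come from the paper, but the class, method names, and the deletion threshold (half the add-trigger, for hysteresis) are assumptions, and the SNMP/GMPLS/Telnet interactions are reduced to placeholders.

```python
# Hypothetical sketch of the Lambda-on-Demand control loop.
POLL_INTERVAL_S = 8          # SNMP polling period stated in the paper
ADD_THRESHOLD_BPS = 800e6    # add a lambda above this traffic volume (paper)
DEL_THRESHOLD_BPS = ADD_THRESHOLD_BPS / 2  # assumed deletion threshold

class ControlServer:
    def __init__(self):
        # GbE #1 is established at the initial step.
        self.lag_members = ["GbE #1"]

    def establish_lambda(self, name):
        # In the paper: select a free interface belonging to the LAG,
        # signal the path via GMPLS with the peer control server and the
        # PXCs, activate the interface, then add it to the LAG via Telnet.
        self.lag_members.append(name)

    def delete_lambda(self, name):
        # Reverse of the above: remove from the LAG, tear down via GMPLS.
        self.lag_members.remove(name)

    def step(self, traffic_bps):
        """One decision step, run every POLL_INTERVAL_S seconds on the
        traffic volume read via SNMP (hysteresis policy assumed)."""
        if traffic_bps > ADD_THRESHOLD_BPS and "GbE #2" not in self.lag_members:
            self.establish_lambda("GbE #2")
        elif traffic_bps < DEL_THRESHOLD_BPS and "GbE #2" in self.lag_members:
            self.delete_lambda("GbE #2")
```

With traffic fluctuating between 200 Mbps and 1.6 Gbps as in the experiment, this loop would add GbE #2 during bursts above 800 Mbps and release the shared lambda between PXC #2 and PXC #3 when the load subsides.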