
Adapting Kubernetes to the SDN stack for telco

By Mohamed El-Serngawy / kubernetes, networking, open source, Symkloud, edge

 

Developments to make cloud native more manageable for telco edge infrastructure

Day by day, Kubernetes is becoming the default cloud platform for microservice deployment. Regardless of whether Kubernetes is deployed on VMs or on bare metal, its networking is still evolving. By default, Kubernetes uses Linux bridges together with iptables or IPVS to manage the underlying networking elements of the containers. In the SDN world, however, programming the network is essential, and the defaults fall short: for example, you cannot use an SDN controller to program a Linux bridge in order to forward traffic outside of a k8s cluster node.
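To make the contrast concrete, the sketch below (a hedged illustration, not COE code) shells out to the standard iptables and ovs-ofctl tools to show the two programming models side by side. It assumes a node where kube-proxy runs in iptables mode and where an OvS bridge named br-int already exists.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, so the two
// rule sets can be compared side by side.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
}

func main() {
	// Default Kubernetes networking: kube-proxy (in iptables mode)
	// renders every Service into NAT rules under the KUBE-SERVICES chain.
	run("iptables", "-t", "nat", "-L", "KUBE-SERVICES", "-n")

	// OpenFlow-based networking: the equivalent forwarding decisions
	// live in the flow table of an OvS bridge instead.
	run("ovs-ofctl", "dump-flows", "br-int")
}
```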

At Kontron, we have deployed and tested Kubernetes on our SYMKLOUD 2U HCI platform, as covered in my previous blog THE GOOD, THE BAD AND THE SCALABLE. For some of our service provider (SP) and telco partners, integrating Kubernetes with an SDN solution is absolutely mandatory. In response, we started the Container Orchestration Engine (COE) project here at Kontron as a proof of concept to investigate possible integration scenarios between Kubernetes and SDN controllers such as OpenDaylight. The aim of the COE project is to use Open vSwitch (OvS) as the underlying networking of a Kubernetes cluster in order to facilitate the integration with OpenDaylight.

The COE consists of three modules:

• COE CNI (Container Network Interface): Attaches the k8s pods to the OvS bridge that is installed and configured on each k8s node.
• COE Kube-Proxy: Sets up the pods' L2 and L3 networking. COE Kube-Proxy leverages OvS flow rules to allow end-to-end communication between pods, between pods and services, and with the outside world.
• COE Watcher: Sends the k8s pod/service networking information to the SDN controller, e.g. OpenDaylight.
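To give a concrete feel for the COE CNI module's job, here is a minimal, hypothetical sketch of the heart of a CNI "ADD" operation in this spirit: plugging a pod's host-side veth into the br-int OvS bridge via ovs-vsctl. The function and port names are illustrative, and the veth creation and network-namespace plumbing that a real CNI plugin must also perform are elided.

```go
package main

import (
	"fmt"
	"os/exec"
)

// attachPodPort plugs the host end of a pod's veth pair into br-int
// and records the pod identity on the port, so a controller can later
// correlate OvS ports with pods.
func attachPodPort(hostVeth, podName, podNS string) error {
	cmd := exec.Command("ovs-vsctl", "--may-exist",
		"add-port", "br-int", hostVeth, "--",
		"set", "interface", hostVeth,
		"external-ids:pod="+podNS+"/"+podName)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("ovs-vsctl: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical pod and veth names, for illustration only.
	if err := attachPodPort("veth1a2b3c", "web-0", "default"); err != nil {
		fmt.Println(err)
	}
}
```

Tagging the port with external-ids is a common OvS pattern; it gives any controller reading the OVSDB a way to map ports back to pods.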

 

The below diagram illustrates the architecture of the COE integration with a Kubernetes cluster and OpenDaylight.

[Figure: Architecture of the COE integration with a Kubernetes cluster and OpenDaylight]

 

As you can see, integrating OvS with a Kubernetes cluster turns the underlying networking into an OpenFlow-based network. Inside each cluster node there are two OvS bridges: br-int and br-ext.

Br-int is managed by COE Kube-Proxy, which acts as a local OpenFlow controller and constructs the flow rules that allow communication between pods and services inside the Kubernetes cluster.

Br-ext is managed by OpenDaylight as an external OpenFlow controller. Br-ext holds one of the public IP addresses assigned to the Kubernetes cluster and forwards traffic to br-int through a patch port.
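The following sketch shows how this per-node bridge layout could be provisioned with standard ovs-vsctl commands driven from Go. The bridge and patch-port names follow the article, the controller address is a placeholder, and the actual COE setup scripts may differ.

```go
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v: %s", args, err, out)
	}
}

func main() {
	// Create the two bridges (idempotently).
	run("ovs-vsctl", "--may-exist", "add-br", "br-int")
	run("ovs-vsctl", "--may-exist", "add-br", "br-ext")

	// Connect them with a patch-port pair so br-ext can hand
	// external traffic over to br-int.
	run("ovs-vsctl", "--may-exist", "add-port", "br-int", "patch-int",
		"--", "set", "interface", "patch-int", "type=patch",
		"options:peer=patch-ext")
	run("ovs-vsctl", "--may-exist", "add-port", "br-ext", "patch-ext",
		"--", "set", "interface", "patch-ext", "type=patch",
		"options:peer=patch-int")

	// Point br-ext at OpenDaylight as its external OpenFlow
	// controller (the controller address is a placeholder).
	run("ovs-vsctl", "set-controller", "br-ext", "tcp:192.0.2.10:6653")
}
```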

The below diagram is an overview of the COE architecture for any particular worker node.

[Figure: Overview of the COE architecture on a single worker node]

The main purpose of having br-int and br-ext is a complete separation between the control plane, represented by eth1, and the data plane, represented by eth2. Such a separation is fundamental in the SDN world and proves much more practical for communication service providers (CSPs). We have provided a Vagrant setup for the Kubernetes cluster and COE at: https://bitbucket.org/kci1/coe/src/master/k8s-vagrant

A typical use case for COE, Kubernetes and OpenDaylight in edge cloud computing is any number of customer-based services. In the diagram below we show that different customers can receive the same service in different slices: the enterprise customer is served based on the pod IP address, for its need for traffic encapsulation transparency, while the other customer is served via its normal external IP address.

[Figure: The same service delivered to different customers in different slices]

To simulate the previous use case and determine the value of deploying Kubernetes, COE and OpenDaylight on the SYMKLOUD MS2900 Series platform, we built a demo that sets up all of the components together. The diagram below shows the complete solution deployed on one SYMKLOUD platform with a TOR (top-of-rack) switch and a PC client providing external communications.

[Figure: Complete demo solution on one SYMKLOUD platform with TOR switch and PC client]

OpenDaylight is deployed on one of the platform nodes in order to configure and manage the two MSH8920 switches, modular hot-swappable units on the SYMKLOUD MS2920 platform that run PicOS from Pica8. Because OpenDaylight manages these Pica8 switches in parallel with the br-ext OvS bridge of each Kubernetes worker node, we were able to dynamically configure the Pica8 switches to forward incoming traffic to its target destination within the Kubernetes cluster. Such an operation would not be dynamically possible using the default Kubernetes networking functions, as mentioned earlier.
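As an illustration of that dynamic configuration, the hedged sketch below pushes a simple flow rule to a switch through OpenDaylight's RESTCONF flow-programming interface. The node id, flow body, address and credentials are illustrative placeholders, not the demo's actual configuration.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Minimal example flow: match traffic arriving on port 1 in
	// table 0 and output it on port 2. Real COE/ODL flows are richer.
	flow := []byte(`{"flow-node-inventory:flow":[{
	  "id":"1","table_id":0,"priority":100,
	  "match":{"in-port":"1"},
	  "instructions":{"instruction":[{"order":0,
	    "apply-actions":{"action":[{"order":0,
	      "output-action":{"output-node-connector":"2"}}]}}]}}]}`)

	// Placeholder controller address and OpenFlow node id.
	url := "http://192.0.2.10:8181/restconf/config/" +
		"opendaylight-inventory:nodes/node/openflow:1/" +
		"flow-node-inventory:table/0/flow/1"

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(flow))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.SetBasicAuth("admin", "admin") // default ODL credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("OpenDaylight responded:", resp.Status)
}
```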

In conclusion, COE facilitates the integration between Kubernetes and OpenDaylight (an SDN solution) on SYMKLOUD, serving the needs of service providers and telecom vendors seeking to adapt their services to cloud-native technologies and an SDN stack.

What are your experiences with ODL/SDN and cloud-native scenarios? Share your thoughts below, or come visit the Kontron team at booth S31 during KubeCon 2018 in Seattle, Dec. 11-13.

