
The Good, the Bad and the Scalable

Mohamed El-Serngawy / Symkloud, open source, networking, kubernetes

How to Scale Kubernetes for Edge Computing Deployments

Kubernetes has become the de facto standard for container orchestration. Since its release in June 2014, it has been adopted by the biggest public cloud providers – Google Compute Engine (GCE), AWS and Azure, to name a few. Its scalability specifications have accordingly been tied to public cloud use cases such as that of GCE.

The criteria Kubernetes sets for a supported large cluster are:

  • No more than 5,000 nodes;
  • No more than 150,000 total pods;
  • No more than 300,000 total containers;
  • No more than 100 pods per node.
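It is worth noting how these limits interact: at the node ceiling, the cluster-wide pod cap, not the per-node cap, is the binding constraint. A quick back-of-the-envelope check (plain shell arithmetic, nothing Kubernetes-specific):

```shell
# How the documented limits interact at full scale:
max_nodes=5000
max_pods_total=150000
# At 5,000 nodes, the cluster-wide pod cap allows only an average of
# 150,000 / 5,000 = 30 pods per node, well under the 100-pod kubelet limit.
avg_pods_per_node=$(( max_pods_total / max_nodes ))
echo "$avg_pods_per_node"
```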

Public cloud providers typically use the above limits to describe how far they can scale their services up or down for customers. For a Kubernetes cluster deployed on bare metal, however, one can also weigh other hardware considerations – and still be successful.

At Kontron, we deploy Kubernetes clusters on the SYMKLOUD MS2900 series of converged platforms. Each platform is designed with nine (9) modular server nodes and is primarily used for telco core and intelligent edge network computing use cases, particularly high-density container (pod) deployments.

One criterion that limits our deployments is the cap of 100 pods per node. For a SYMKLOUD modular server node with two independent CPUs, 24 cores in total and up to 124G of RAM, it is easy to see why 100 pods per node is a significant restriction.

Canonical’s distribution of Kubernetes at the Edge

To provide some technical background on our Kubernetes edge computing use case: we deploy Kubernetes using Canonical's MAAS and Juju tools, which make what "looks like" a complicated process much simpler. The diagram below reflects a standard deployment architecture on a single SYMKLOUD MS2900 platform with its nine (9) modular server nodes.


 Figure 1. SYMKLOUD MS2900 platform configuration of redundant Kubernetes and Docker workers

As you can see, one node is reserved for the deployer (MAAS & Juju), three nodes serve as Kubernetes masters in HA (High Availability) mode, and the balance are Kubernetes worker nodes. The table below lists the SYMKLOUD hardware specifications used in our deployment model.
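The node budget of this single-enclosure layout works out as follows (simple arithmetic restating the figure above):

```shell
# One 9-node MS2900 enclosure: 1 deployer + 3 HA masters, the rest are workers.
total_nodes=9
deployer=1
masters=3
workers=$(( total_nodes - deployer - masters ))
echo "$workers"
```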

  • MSP8060 series modular servers: Intel 32-core CPU / 64G RAM
  • MSP8040 series modular servers: Intel 24-core CPU / 64G RAM


 Figure 2. Two SYMKLOUD MS2900 platforms for high performance pod configuration.

The Steps we Took to Scale at the Edge

To expand our Kubernetes cluster's capabilities, we increased the worker configuration to 254 pods per node. As a side note, we currently use Flannel and Calico as the networking plugins for our Kubernetes cluster; because each node is typically allocated a /24 pod subnet (254 usable addresses), this is what caps us at 254 pods per node.
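As a sketch of how the per-node pod cap can be raised: the Juju charm option shown in the comment is one path through Canonical's tooling, while the KubeletConfiguration file applies to a kubelet configured directly. Treat both as illustrative rather than a verified recipe for our exact deployment.

```shell
# With Canonical's charms, the kubelet setting can be pushed through Juju.
# The option name below is an assumption; check your charm's documentation:
#   juju config kubernetes-worker kubelet-extra-config='{maxPods: 254}'
#
# On a kubelet configured via file, the equivalent knob is maxPods:
cat > kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 254
EOF
grep 'maxPods' kubelet-config.yaml
```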

In the near future, however, we plan to replace that with our own built-in plugin, giving us more control over cluster networking and allowing us to scale well above 254 pods per node.

As another side note, we encountered some issues when trying to test the maximum of 5,000 nodes per Kubernetes cluster. Each SYMKLOUD enclosure holds 9 nodes, so deploying 5,000 nodes would require 556 chassis.
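The chassis count above is simply ceiling division:

```shell
# 5,000 nodes at 9 nodes per SYMKLOUD enclosure, rounded up:
nodes=5000
nodes_per_chassis=9
chassis=$(( (nodes + nodes_per_chassis - 1) / nodes_per_chassis ))
echo "$chassis"
```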

That said, the SYMKLOUD chassis is a compact design (21 in deep, 19 in wide, 3.5 in high) that is well suited to edge computing sites. Even so, having 5,000 physical nodes deployed at edge sites and all managed by a single Kubernetes cluster would rarely happen in practice.

Consider instead a more realistic deployment of 45 nodes distributed across five (5) platforms, all connected through a TOR switch.

Based on the following pod configurations, you can get a better sense of how many pods we can support in our deployments using SYMKLOUD hardware.


High Performance Pod
  limits:   memory: 500Mi, cpu: 1000m
  requests: memory: 150Mi, cpu: 250m

High Density Pod
  limits:   memory: 50Mi, cpu: 100m
  requests: memory: 50Mi, cpu: 100m
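For a rough sense of what the high-performance memory limit implies, dividing a 64G worker's memory by the 500Mi pod limit gives a ceiling in the low 130s. This is a naive bound that ignores system and Kubernetes overhead, which pushes the practical number lower:

```shell
# Naive memory-only pod ceiling for high-performance pods on a 64G worker:
node_mem_mi=$(( 64 * 1024 ))
pod_limit_mi=500
max_hp_pods=$(( node_mem_mi / pod_limit_mi ))
echo "$max_hp_pods"
```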


The number of pods in relation to the number of Kubernetes worker nodes:


No. of Nodes      High Performance Pods    High Density Pods
10 k8s-workers    1,280 pods               2,540 pods
25 k8s-workers    3,200 pods               6,350 pods
41 k8s-workers    5,760 pods               11,430 pods
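The high-density figures in the first two rows track the 254 pods-per-node kubelet ceiling directly, as a quick check shows:

```shell
# 254 pods per node times the worker count reproduces the first two rows:
hd_10=$(( 254 * 10 ))
hd_25=$(( 254 * 25 ))
echo "$hd_10 $hd_25"
```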


Test Results

We evaluated the SYMKLOUD Kubernetes cluster deployment by running multiple test cases in which maximum capacity was reached, followed by response-time and health checks of the individual cluster. Based on these results, the chart below illustrates the relation between Kubernetes cluster response time and the number of nodes. Response time clearly degrades as the number of nodes increases, impacting the cluster's health.

 Figure 3. Kubernetes cluster response time vs. number of nodes.

Our Findings and future plans

Judging by our tests, Kubernetes running on a SYMKLOUD MS2900 converged platform promises high performance and scalability when managing containers for Edge computing deployments.

Although networking currently limits the number of pods per node, we can improve the Kubernetes cluster's health and response time by working on Kubernetes federation.

Having multiple clusters each manage a small number of nodes can without a doubt improve the response time of the individual cluster; nevertheless, we must continue to work on the communication between those clusters.

My next blog will cover these topics and more on optimizing Kubernetes for a telco edge environment while preserving its open source mandate.

What Kubernetes projects are you working on? What challenges are you facing? Share your thoughts in the comments section below.



