If you work in the IT industry, you must have heard of Kubernetes (if not, I'm afraid you need to hustle!). Initially released in 2014 and descended from Google's internal cluster manager Borg, Kubernetes is now taking over the world of application development in a swift and unstoppable way. Its many benefits include declarative deployment, fault tolerance, and auto-scaling, and Kubernetes takes infrastructure complexity out of the equation for application development. Developers no longer need to worry about the underlying infrastructure running their applications, which makes DevOps possible and streamlines the application delivery process.
Before discussing Kubernetes, it is necessary to mention containers and Docker. Containers are like VMs, but much more lightweight and efficient, packaging only the application code and its dependencies. Containers are megabytes in size, can boot up in milliseconds, and are highly portable as long as a compatible container runtime is available. Thanks to Docker and its container runtime, Docker Engine, container adoption has grown dramatically, making effective management a challenge.
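To make "packaging only the application code and its dependencies" concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web app (the file names and the app itself are illustrative, not from any real project):

```dockerfile
# Start from a slim base image: no full guest OS, just a minimal runtime
FROM python:3.11-slim

WORKDIR /app

# Install only the dependencies the app declares
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# Command to run when the container starts (hypothetical entry point)
CMD ["python", "app.py"]
```

Building this with `docker build` produces an image typically tens of megabytes in size, which is what makes containers so much quicker to ship and start than full VMs.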
Companies like Google have millions of containers running around the world, making it nearly impossible to manage them manually. And that's where Kubernetes comes into the picture.
Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. K8s is the intermediary that sits between developers and applications to ensure the desired state of those applications is met. Say an application runs in containers. How do we ensure the containers are always available? When the workload increases, how do we start more containers to support it? And how do we expose the service to users and balance the load across containers?
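The desired-state idea can be sketched with a standard Kubernetes manifest. In the illustrative example below (the names and image are placeholders, not from the original article), a Deployment declares that three replicas of a web app should always be running, and a Service exposes them and balances traffic across them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # illustrative name
spec:
  replicas: 3                # desired state: keep 3 Pods running at all times
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative container image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  selector:
    app: web-app             # routes traffic to the Pods above
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer         # exposes the app to users and balances the load
```

After this is applied with `kubectl apply -f`, Kubernetes continuously reconciles reality with the declaration: if a container crashes, a replacement is started automatically, and scaling is as simple as changing the `replicas` count (or attaching a HorizontalPodAutoscaler to do it based on load).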
K8s drastically simplifies infrastructure operations and elevates the developer's focus to business logic. Though a graduated CNCF project, the open-source version of K8s is not as user-friendly as you'd imagine. Not to mention the many components that make up K8s, like the API server, controllers, the scheduler, Pods, and kubelets, in addition to Services, Deployments, Secrets, Ingresses, and ReplicaSets. Making K8s work effectively in an enterprise environment involves a steep learning curve.
And that's why Sangfor developed KubeManager. KubeManager is based on K8s but offers a much simpler GUI and comprehensive integrations with enterprise-ready features like monitoring and logging, an image registry, and an application store. Leveraging Sangfor HCI on the infrastructure layer, Sangfor KubeManager delivers the key ingredients of K8s while abstracting away the underlying complexity. It supports multi-cluster and multi-cloud K8s management, where containerized workloads can be placed anywhere they fit best, whether public cloud or private cloud, and managed by KubeManager.

Change doesn't happen overnight, and VMs still play a major role in hosting enterprise applications. But the future is already on the horizon: applications built in the next 5 years are projected to exceed the total number of applications we have now, and most of these new applications will run in containers. Sangfor KubeManager on HCI offers a seamless path for customers to evolve their business from VMs to containers, getting them ready for the digital future.