The Curious Case of Kubernetes in the Enterprise

Nesh (Steven Puddephatt), Senior Solutions Engineer @ GlobalDots
6 Min read

The Kubernetes website has a whole section dedicated to case studies and customer success stories. It is full of well-known brands like Adidas, Amadeus, BlackRock, Booking.com, Box, and many others. Bose brags about running its worldwide IoT platform on Kubernetes, Capital One tells us about big data fraud-detection projects, and CERN describes how it uses Kubernetes to analyse the 330 petabytes of data coming from its particle accelerators.

While looking at the list and reading the stories, you might conclude that these mega-corporations, media giants and engineering behemoths are nothing like your business. Your business is smaller, poorer and possibly not as tech-savvy as they are. So maybe Kubernetes is not for you. Or is it?

The introduction of cloud computing by Amazon levelled the playing field for small companies, giving them access to capacity and capability previously available only to large enterprises with deep pockets. The introduction of Kubernetes (K8s) levels the field for the enterprise in the same way, giving it the tooling and capabilities that startups have traditionally used to massively grow their businesses.

Kubernetes is already used extensively by the majority of new companies born in the cloud, yet enterprise adoption of K8s has been much slower, for a variety of reasons. In the first instance, K8s is complex and involves a steep learning curve. To truly benefit from it, additional technologies must be mastered, including cache management and load balancing, and it also requires a new set of monitoring and health-checking tools.

Using K8s also introduces major changes to the delivery pipeline and, as a result, changes established procedures and regulatory controls – SOX IT controls, for example, must be revised. Nor are cost savings assured: it is very easy to over-allocate resources.

Having said all of this, however, enterprises should be doing all they can to start adopting K8s. Let’s take a closer look.

K8s is Here for the Long Run

Kubernetes is certainly no fad – it is here to stay. Indeed, one recent survey of 247 IT professionals working at organisations with 1,000 or more employees found that well over half (59%) are running Kubernetes in a production environment, with one-third (33%) operating 26 or more clusters and one-fifth (20%) running more than 50. Whilst these figures don’t take into account the continued use of public cloud services, they certainly indicate that Kubernetes is gaining traction within local data centre environments.

Talent Retention

Within the IT community in general, salary doesn’t appear to be the leading driver when considering a new job. Indeed, more often than not, the opportunity for professional growth and learning is the juiciest carrot to dangle. Introducing technology such as K8s not only increases productivity amongst IT staff but also provides the space to work on interesting projects, which leads to greater job satisfaction and retention. As enterprises continue to invest in K8s, competition for employees skilled in it will only grow – which makes growing your own talent an increasingly attractive approach.

The Case for KOTS

K8s is a perfect way for enterprise software vendors to deliver on-premises installation packages of commercial off-the-shelf (COTS) software. There is even an acronym for it – KOTS (Kubernetes-off-the-shelf). Enterprises buying KOTS applications must know how to work with, manage, secure and monitor the new technology stack. The future of enterprise software is Kubernetes applications delivered to enterprises so that they can run privately and securely in their own environments. KOTS enables vendors to package an upstream, fully supported distribution of Kubernetes with their application for those enterprises that have yet to fully embrace Kubernetes. Once deployed, KOTS gives administrators the ability to get an application configured and running using step-through configuration, automated preflight checks and one-click updates.
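To make that concrete: a KOTS release is essentially a bundle of ordinary Kubernetes manifests plus a few vendor-supplied custom resources that drive the admin console. The sketch below is illustrative only – the application name is made up, and the fields follow the kots.io Application custom resource as commonly documented – showing how a vendor might declare which workloads the console should track:

```yaml
# Illustrative only: a minimal kots.io Application custom resource that a
# vendor might ship alongside its ordinary Kubernetes manifests.
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: example-app            # hypothetical application name
spec:
  title: Example App
  statusInformers:
    - deployment/example-api   # workloads the admin console reports on
```

The rest of the package is the application’s ordinary Deployments, Services and ConfigMaps – which is why enterprises buying KOTS software still need to know how to operate the underlying stack.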

What’s the Good Stuff Then?

K8s has the potential to do for the enterprise what it does for Internet startups: reduce time to market, improve Service Level Agreements (SLAs) and boost the bottom line. Today, every enterprise is a software business.

Enterprise CIOs are tasked with delivering high-quality applications and an outstanding customer experience that rival those of the usual giants. The need for speed and agility of innovation is driving the way companies build, run and secure their modern applications, pushing software architecture towards microservices. These microservices, in turn, depend on containerised applications and orchestration to hasten the deployment of improvements and new capabilities essential to maintaining highly available, secure customer experiences.

Kubernetes does this by introducing automation in a number of key areas: deployment of application services, configuration of application networks and distribution of services across infrastructure, to name but a few.
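As a small illustration of that declarative automation, the sketch below (the service name, image and registry are hypothetical) defines a workload once and leaves placement, replication and internal load balancing to Kubernetes:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of the container
# running and replaces any replica that fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0.0
          ports:
            - containerPort: 8080
---
# A Service gives the replicas one stable, load-balanced address in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080
```

Apply the manifest and the scheduler decides where the pods run, restarts them when they fail and keeps the Service pointing at healthy replicas – exactly the kind of automation described above.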

But Isn’t It Risky?

Of course it is risky. But even setting aside the opportunity cost of losing market relevance by doing nothing, there are things to know and things to do in order to mitigate the risks of adopting this new technology. The following applies if you want to adopt K8s as a modernisation enabler for your monolithic, legacy applications.

K8s is not a panacea for all of your IT woes, and your software may not be ready for containers and Kubernetes. Do not assume that every piece of functionality in your current monolithic system can be broken down into microservices and simply packed into containers (after which you’ll have the scalable, decoupled, easy-to-deploy environment promised by the success stories). Instead, you may end up with a bunch of services that are still highly coupled and can only be deployed together, without solving the initial problem of bulky and risky deployments.

Implementing K8s in your organisation means making a significant technical effort and investing in integration with other technologies. Kubernetes relies on other projects, many of them open source, to provide services like registry, security, telemetry, networking and automation. Organisations need to recognise this and factor it into their implementation plans. A good workaround may be to use tested, integrated, ‘enterprise-ready’ products such as Red Hat’s OpenShift.

Do not go the DIY route without the right level of expertise in-house. Kubernetes is not a single executable running on a single server. Rather, it is a conglomerate of different applications and network layers, closely integrated to produce the final solution: a container runtime such as Docker, etcd, load balancers, kubelet, kube-proxy, kube-apiserver, an SDN, and many more.

IT often underestimates the complexity that comes with running highly available, secure applications on top of Kubernetes. While it is easy to get a Kubernetes cluster up and an application running in it, “up” and “production ready” are very different states. K8s requires a certain level of expertise to maintain in production with your software running on top. Service health checks, infrastructure monitoring, application instrumentation, deployment strategies, networking, VM security, metadata API security and container security all need to be planned for.
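Health checking is a good example of that extra work. The fragment below is a minimal sketch – the container name, image, port and endpoint paths are all hypothetical – of the liveness and readiness probes each container should declare so that Kubernetes can restart unhealthy instances and keep traffic away from instances that are not yet ready:

```yaml
# Hypothetical probe configuration inside a pod template.
containers:
  - name: orders-api
    image: registry.example.com/orders-api:1.0.0
    ports:
      - containerPort: 8080
    livenessProbe:             # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:            # withhold traffic until this check passes
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

The application has to expose those endpoints and answer them honestly, which is exactly the kind of instrumentation work that is easy to underestimate.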

Another aspect that is often overlooked in a K8s implementation is that, to truly benefit from Kubernetes, existing business and development practices need to be adapted (CI/CD, automated testing and other best practices). The business and the IT department must understand the implications and ensure that they have the resources and skill set available, either in-house or via a trusted partner.
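What ‘adapting practices’ means in the pipeline can start small. Purely as an illustration – assuming GitHub Actions and a runner that already has credentials for the cluster, neither of which this article prescribes – a deployment stage might build an image, push it and roll it out declaratively:

```yaml
# Illustrative CI job only: build the image, push it, then roll the new tag
# out with a declarative kubectl update. Registry and names are hypothetical.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/orders-api:${{ github.sha }} .
      - run: docker push registry.example.com/orders-api:${{ github.sha }}
      - run: |
          kubectl set image deployment/orders-api \
            orders-api=registry.example.com/orders-api:${{ github.sha }}
```

The mechanics matter less than the habit: every change flows through the same automated, auditable path, which also helps with the revised SOX controls mentioned earlier.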

It is tempting to throw everything at Kubernetes in view of all of its benefits. Generally speaking, however, those benefits involve huge changes to your application’s overall architecture and, by implication, a large operational risk. Start with the low-hanging fruit and migrate your monolithic applications step by step.

When planning the migration from your monolith to actual microservices, you’ll first need to properly separate your data (in order to have truly decoupled services) and then re-integrate using the new best practices that Kubernetes orchestration provides.

In Kubernetes, your application infrastructure is treated as cattle rather than pets. Containers are ephemeral by nature: they will be created and disposed of as needed. This is an important point and should be a guiding principle when you start to bring applications into containers. Containers are not VMs, nor should they be treated as such. Stateful data should be handled in a responsible way.
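Where state genuinely cannot be avoided, it should live outside the container’s filesystem. A minimal sketch of the responsible approach (the names, image and size below are placeholders) is to request a PersistentVolumeClaim and mount it, so the data survives any individual container being disposed of:

```yaml
# Hypothetical example: durable storage is requested separately from the
# workload, so the data outlives any individual container.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-data
```

In practice a database like this would more often run as a StatefulSet or be handed off to a managed service entirely; the point is simply that state belongs in a volume (or outside the cluster), never inside the container itself.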

Do not neglect to set proper resource constraints on all running containers. Without CPU and memory constraints for each container, certain containers can become resource hogs and leave little for their neighbours; applications will crash and unexpected, intermittent service disruptions will occur.
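A minimal sketch of such constraints (the figures are placeholders to be tuned against observed usage) looks like this for each container:

```yaml
# Hypothetical requests and limits for one container: requests guide
# scheduling, limits cap what a runaway container can take from its neighbours.
containers:
  - name: orders-api
    image: registry.example.com/orders-api:1.0.0
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```

A LimitRange in each namespace can also supply default requests and limits for containers that do not declare their own, so a forgotten setting does not go unnoticed.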

Kubernetes is here to stay, accelerating the digital transformation for countless enterprises. Don’t leave it too late.

If you have any questions, contact us today and we’ll help you with your performance and security needs.
