Will Kubernetes ever get easier?

Monday, February 10, 2025, 10:00 AM, from InfoWorld
Kubernetes hardly needs an introduction. After Linux, it’s the second-highest velocity open source project in the world. Over the years, thousands of companies have contributed to the platform. At the heart of the thriving cloud-native ecosystem, Kubernetes has become a platform for all things, the foundation for everything from CI/CD pipelines, to machine learning training clusters, to planet-scale distributed databases.

Kubernetes, which recently celebrated its 10th birthday, is more flexible than ever. It now supports many workload types out of the box—stateless, stateful, and data processing workloads, automated batch jobs, event-driven autoscaling workloads, and more. Custom resource definitions (CRDs) and operators now make it quick to add capabilities like observability, networking, backup and recovery, and policy management.
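
To make that mechanism concrete: a CRD simply registers a new resource type with the API server, which an operator then watches and reconciles. Below is a minimal sketch using the official Kubernetes Python client; the demo.example.com group and Backup kind are hypothetical, invented here for illustration.

    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig with cluster access

    # Hypothetical CRD: a namespaced "Backup" resource with one string field.
    crd = client.V1CustomResourceDefinition(
        metadata=client.V1ObjectMeta(name="backups.demo.example.com"),
        spec=client.V1CustomResourceDefinitionSpec(
            group="demo.example.com",
            scope="Namespaced",
            names=client.V1CustomResourceDefinitionNames(
                plural="backups", singular="backup", kind="Backup",
            ),
            versions=[client.V1CustomResourceDefinitionVersion(
                name="v1", served=True, storage=True,
                schema=client.V1CustomResourceValidation(
                    open_api_v3_schema=client.V1JSONSchemaProps(
                        type="object",
                        properties={"spec": client.V1JSONSchemaProps(
                            type="object",
                            properties={"schedule": client.V1JSONSchemaProps(type="string")},
                        )},
                    )
                ),
            )],
        ),
    )

    # Once registered, "Backup" objects can be created like any built-in
    # resource; an operator would watch and reconcile them.
    client.ApiextensionsV1Api().create_custom_resource_definition(crd)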

The community has devised tools to tackle many roadblocks, like multi-cluster management, porting legacy virtual machines, deploying to edge or bare-metal environments, and using Kubernetes for AI development—helping satisfy more and more use cases, all the while inventing cuddly mascots for new projects entering the ballooning cloud-native landscape.

All this cushy progress has put Kubernetes front and center in the future of enterprise cloud infrastructure. According to the Cloud Native Computing Foundation’s 2023 annual survey, 84% of organizations are either using Kubernetes in production or evaluating it. But, as Kubernetes adds more functionality and absorbs more workloads, is it getting any easier to use?

Remarkable progress

“Kubernetes has definitely gotten a lot easier and more stable to use,” says Murli Thirumale, general manager of Portworx, Pure Storage’s cloud native business unit. He credits this to Kubernetes having triumphed in the container scheduler wars of the 2010s, which pitted Kubernetes against Docker Swarm, Apache Mesos, HashiCorp Nomad, and others. After reaching a consensus, the industry aligned to improve the base deployment and build solutions on top, leading to a more stable distribution and consolidation of vendors for core functions.

Nowadays, hyperscalers offer managed Kubernetes distributions, like Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). These, as well as plenty of alternatives, provide user-friendly, GUI-based options to manage clusters.

As a result, 84% of organizations are moving away from self-hosted Kubernetes and toward managed services, according to The State of Kubernetes 2023 survey. But this is not to say that innovation has halted by any means.

“There are many areas where ease of use is improving,” says Peter Szczepaniak, senior product manager at Percona. He specifically highlights how Kubernetes operators are making many areas more accessible, such as running complex workloads like databases with high levels of automation. Other advancements in Kubernetes itself are enhancing usability, he adds, such as expanded support for CSI drivers, affinity and anti-affinity configuration, and StatefulSets that directly address specific use cases.

Enterprise-centric tools also cover peripheral areas, easing usability for backup and disaster recovery. “The alternative is to script it yourself with various bits of open source code and stitch it together,” says Gaurav Rishi, vice president of product and cloud native partnerships at Kasten by Veeam. “Things have become a lot easier, including security. Even five years ago, enterprises weren’t productizing clusters because security was a concern,” he adds.

Kubernetes began with a robust REST API, allowing all resources to be acted upon with calls to API objects. Over the past decade, we’ve witnessed impressive alignment in the cloud-native industry, and the surrounding community has matured significantly. As an example, the CNCF offers certification for over 100 Kubernetes distributions and installers from vendors to ensure they actually conform to the expected APIs.
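
Every typed helper in the official client libraries is a thin wrapper over one of those REST endpoints. As a small illustration with the Python client (assuming a reachable cluster and a local kubeconfig), listing pods is just a GET against a well-known path:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig with cluster access

    # The typed call below is sugar over GET /api/v1/namespaces/default/pods;
    # every Kubernetes resource is reachable through a path like this.
    for pod in client.CoreV1Api().list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.phase)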

Rising complexity

That said, common snags remain. Spectro Cloud’s 2023 State of Production Kubernetes report found that 75% of Kubernetes practitioners encounter issues affecting the running of their clusters, up from 66% in 2022. It’s common to run 10 or more clusters, and a good chunk of developers (33%) find themselves begrudgingly doing platform work.

Many users say further abstraction is necessary, as there is still significant overhead to setting up Kubernetes, from provisioning to configuring security, hooking up monitoring, and so on. Although inroads have been made for small-to-medium businesses, Kubernetes is still often overkill for small projects and small companies that lack skill sets and resources. Interestingly, a 2022 survey from D2iQ found that only 42% of applications running on Kubernetes actually make it into production.

“It’s easier to get started with Kubernetes today,” says Itiel Shwartz, co-founder and CTO of Komodor. “Now it’s a sort of cloud itself, and it’s expected that services like your database, certificate provider, and load balancer will all run under Kubernetes as well. It’s gotten much more complicated to productionize.”

Significant gains have been made to lower the barrier to entry to Kubernetes and improve extensibility. But now the pressure is on to account for this diversity, leading to new problems on “day two.” “Kubernetes is mature, but most companies and developers don’t realize how complex of an environment it can be until they’re actually at scale,” says Ari Weil, cloud evangelist and VP of product marketing at Akamai.

Where users hit gotchas

Kubernetes users often encounter gotchas in a few areas. One barrier is simply contending with the sheer number of tools out there. The CNCF Cloud Native Landscape now hosts 209 projects with 889 total repositories, enough to cause analysis paralysis.

While tools like Helm and Argo CD streamline managing add-ons, each tool has unique configuration needs, which can introduce friction, explains Komodor’s Shwartz. “Add-ons in Kubernetes environments are notoriously difficult to manage because they introduce layers of complexity and require diverse expertise,” he says.

Not all cloud-native tools are created equal, adds Portworx’s Thirumale. He sees this influencing some organizations to make a safer bet on larger, established vendors instead of smaller innovators, especially for high-risk areas like AI or virtualization, where the cost of failure is high.

“Kubernetes is 10 years young,” jokes Rishi. There is plenty of innovation occurring, but he agrees the complex ecosystem can be confusing to grasp. On top of that are the cybersecurity risks that still plague containers, and many organizations must jump through hoops to comply with stringent security requirements.

For instance, this could entail hardening data handling with the proper algorithms to meet NIST’s Federal Information Processing Standards (FIPS) or adopting continual container scanning practices to meet the FedRAMP Vulnerability Scanning Requirements for Containers. “On one hand, threat vectors are increasing, but on the other hand, the ecosystem has grown,” says Kasten’s Rishi.

Other common bottlenecks include setting up probes to check the health of applications, setting the proper requests and limits, and configuring network policies. “Improperly configured resource limits can lead to issues like pod evictions, unbalanced workloads, and unexpected application failures,” says Shwartz.
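
To show what proper requests, limits, and health probes look like in practice, here is a minimal sketch using the official Python client. The image, paths, and numbers are placeholders, not recommendations; the right values depend on the workload.

    from kubernetes import client, config

    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="nginx:1.27",  # illustrative image and values only
        # Requests guide scheduling; limits cap usage. Limits set too low
        # invite OOM kills and evictions; set too high, they waste capacity.
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "128Mi"},
            limits={"cpu": "500m", "memory": "256Mi"},
        ),
        # The kubelet restarts the container if this probe keeps failing.
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/", port=80),
            initial_delay_seconds=5,
            period_seconds=10,
        ),
        # A failing readiness probe removes the pod from Service endpoints
        # instead of restarting it.
        readiness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/", port=80),
            period_seconds=5,
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="probe-demo"),
        spec=client.V1PodSpec(containers=[container]),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)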

Another pitfall is simply a lack of widespread knowledge about how to use Kubernetes, says Percona’s Szczepaniak. Whereas virtual machines have been around for over twenty years, Kubernetes is still growing in mindshare within enterprises. “I’ve talked to many devops teams that get immediate pushback when introducing new technology if it requires any Kubernetes knowledge from the rest of the organization.”

Working with legacy workloads

At the same time, Kubernetes is becoming harder for devops teams to resist. Described as the cloud’s OS, Kubernetes now caters to far more than just container scheduling. Its scope now encompasses networking and storage through plugins using the Container Network Interface (CNI) and Container Storage Interface (CSI). “Through these extensions, Kubernetes became a multi-cloud control plane for infrastructure,” says Thirumale.

Kubernetes is being used to handle older workload types, too. The emerging KubeVirt tool, for instance, can schedule virtual machines (VMs) that often support legacy mission-critical applications. “This is a huge step forward for Kubernetes,” says Thirumale, who believes Kubernetes is now better positioned to enter the mainstream of existing applications in high-volume transaction areas like automotive, banking, heavy industry, and beyond.
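
Because KubeVirt models VMs as custom resources, the generic Kubernetes API machinery is all that is needed to create one. The sketch below, using the official Python client, is heavily abbreviated; a real VirtualMachine spec also needs disks, volumes, and a boot source.

    from kubernetes import client, config

    config.load_kube_config()

    # A skeletal KubeVirt VirtualMachine manifest (abbreviated for brevity).
    vm = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "legacy-app-vm"},
        "spec": {
            "running": True,
            "template": {"spec": {"domain": {
                "resources": {"requests": {"memory": "2Gi"}},
            }}},
        },
    }

    # Custom resources go through the generic custom-objects API.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubevirt.io", version="v1",
        namespace="default", plural="virtualmachines", body=vm,
    )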

Others agree support for modern virtualization is a significant shift. “Before, you had two ships in the night: virtual machines on the legacy side versus cloud-native Kubernetes workloads,” adds Rishi. “Today, you can have all the best of what Kubernetes brings in terms of operations and development and faster velocity, even with virtual machines.”

Granting legacy workloads admission to cloud-native infrastructure is appealing not only as a way to modernize legacy enterprise stacks but also as a way to avoid price hikes stemming from Broadcom’s consolidation of VMware offerings.

That said, legacy workloads carry different requirements around data migration, backup, disaster recovery, and securing access, says Thirumale. “We’re adding new functionality to Kubernetes data management that wasn’t needed for containers but is needed for virtual machines,” he says. Naturally, support for these additional use cases brings additional complexity for users.

Supporting AI workloads

Another area begging for Kubernetes — and for better usability — is AI development. AI lends itself to containerization, says Thirumale, making Kubernetes a natural fit for both model training and inferencing workloads. “AI and Kubernetes are almost 90% overlapped,” he says. “By its very nature, AI demands an elastic infrastructure.”

Yet, applying Kubernetes to new AI-native applications and managing the scalability of AI inference models on Kubernetes clusters are major challenges, says Vultr CMO Kevin Cochrane. “The operational discipline and playbooks for hosting containerized applications on Kubernetes need to be revisited,” he says. “There’s been little advancement in the tooling to support an integrated pipeline of AI models from training, to tuning, to inference, to global scalability within a Kubernetes cluster.”

“Machine learning workloads need to be deployed in many places, get resources, and be very elastic,” says Chase Christiansen, staff solutions engineer at TileDB. While Kubernetes is a solid foundation for AI, using it alone for this purpose requires deploying ad hoc services, making special operators for those services, building a pipeline, and handling unique scheduling requirements. For example, long-running jobs can fail overnight, complicating workflows without built-in mechanisms for retries or resilience, says Christiansen.
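
To be fair, Kubernetes does give plain batch work a baseline: the Job resource re-runs failed pods up to a configurable backoff limit, as in the minimal sketch below (the image and command are placeholders). What it lacks is ML-aware resilience, such as checkpointing and resuming a half-finished training run, which is the gap Christiansen describes and one that dedicated tooling aims to close.

    from kubernetes import client, config

    config.load_kube_config()

    # The Job controller replaces failed pods up to backoff_limit times.
    # Each retry starts from scratch; checkpointing is up to the application.
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="train-once"),
        spec=client.V1JobSpec(
            backoff_limit=4,  # retry a failed pod up to four times
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="trainer",
                        image="example.com/trainer:latest",  # placeholder
                        command=["python", "train.py"],
                    )],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)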

Kubeflow, an incubating CNCF project, addresses these challenges head-on by making it easier to deploy and manage machine learning models on Kubernetes. Organizations like CERN, Red Hat, and Apple use and contribute to Kubeflow, developing an ecosystem and toolkit for training AI models that seamlessly integrates with cloud-native architectures.

When paired with KServe for framework-agnostic AI inferencing, Kubeflow can simplify workflows for data scientists. “These tools are making AI much easier to use because you’re abstracting infrastructure away,” says Christiansen. “This helps the user experience of creating models, helping drive business outcomes.”
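
KServe follows the same pattern of hiding infrastructure behind a single custom resource: one InferenceService object stands in for the deployment, autoscaling, and routing of a model server. A rough sketch, assuming KServe is installed in the cluster and using a placeholder model URI:

    from kubernetes import client, config

    config.load_kube_config()

    # One custom resource describes the whole model-serving stack.
    isvc = {
        "apiVersion": "serving.kserve.io/v1beta1",
        "kind": "InferenceService",
        "metadata": {"name": "sklearn-demo"},
        "spec": {"predictor": {"model": {
            "modelFormat": {"name": "sklearn"},
            "storageUri": "gs://example-bucket/model",  # placeholder URI
        }}},
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.kserve.io", version="v1beta1",
        namespace="default", plural="inferenceservices", body=isvc,
    )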

AI inferencing at the edge will likely become more commonplace for niche use cases. That said, cloud container deployments will persist, necessitating a common control layer. “You want a continuum of being able to manage multiple clusters, regardless of where they are deployed, to be able to mobilize them, and not be locked into a specific vendor,” says Kasten’s Rishi.

Moving Kubernetes to the edge

For AI inferencing and many other use cases where proximity to data and latency are top concerns, Kubernetes is finding a role in edge computing and providing options for smaller container deployments.

“Kubernetes use cases on the edge are growing for sure,” says Rishi. This is alluring for government installations that require air-gapped secure environments or for retail and internet of things (IoT) devices with bandwidth or network limitations, he says. For instance, the US Department of Defense is actively deploying Kubernetes in areas with zero internet connectivity, from battleships to F-16 fighter jets.

Stripped-down Kubernetes flavors have emerged to enable lightweight edge container deployments. Rishi points to K3s, a lightweight Kubernetes distribution for resource-constrained environments, and Bottlerocket, a portable Linux-based operating system for running containers. Major vendors also offer options for smaller container deployments, from MicroShift, a slimmer alternative to OpenShift for edge devices, to AWS Fargate, a serverless compute engine for containers. Alternatives like Amazon Elastic Container Service (Amazon ECS) or HashiCorp Nomad eschew Kubernetes entirely for simpler use cases.

“Kubernetes is definitely getting easier,” says Raghu Vatte, Field CTO and VP of Strategy at ZEDEDA. Yet, on the edge, you could be managing tens of thousands of tiny clusters, he explains, necessitating a centralized mechanism to orchestrate and deploy containers using a heterogeneous runtime. “You don’t want to rebuild an application just because you’re deploying it on the edge.”

The edge also appeals to areas like gaming or hyper-personalized workloads, which often require extremely low latencies. But how can organizations move compute closer to the user while keeping deployment standards consistent across environments? “Having openness and a lack of managed services gives you that portability,” says Akamai’s Weil.

To make Kubernetes workloads portable across clouds and regions, Weil recommends avoiding proprietary wrappers. Instead, he highlights the value of “golden path templates” that retain compatibility with Kubernetes in its open-source form and build upon stable CNCF projects like OpenTelemetry, Argo, Prometheus, or Flatcar. This can simplify back-end orchestration, reduce vendor lock-in, and bring flexibility for multi-cloud, he says.

Options for smaller container deployments can also support internal experimentation. For instance, Jemiah Sius, head of developer relations at New Relic, describes how his team operates as a small engineering division within New Relic, using tools like Minikube or K3s to host technical documentation or spin up short-lived prototypes and proofs of concept during workshops in resource-constrained local environments.

According to Sius, Kubernetes is becoming more usable for these scenarios. He also believes it is becoming easier to adopt in general, with increased community support, improved documentation, and a more streamlined installation process. However, he notes that there is still a steep learning curve, particularly in understanding performance and managing data silos.

Lowering the barrier to Kubernetes

A decade in, Kubernetes remains complex, even with managed services. However, efforts are underway to build abstractions that improve developer experience and accessibility. “Kubernetes tooling is helping bridge skill gaps by providing intuitive interfaces and out-of-the-box insights, which lowers the expertise barrier,” says Komodor’s Shwartz.

Tools are also evolving to make cluster visibility more accessible, with one-step installations for observability functions. “Before, you had to actually augment your application code and manually add configurations by changing the manifest,” explains New Relic’s Sius. User-friendly observability options now help operators better understand the health of their systems and identify performance or reliability problems with far less effort.

AI and machine learning are also playing a role in enhancing the user experience of Kubernetes management. For instance, Andreas Grabner, devops activist at Dynatrace and a CNCF ambassador, shares how generative AI-driven tools have made it easier to observe and diagnose Kubernetes clusters, aiding root cause analysis with practical information to optimize systems.

Kubernetes is cemented into the future of cloud-native infrastructure. In this world, edge will become a more common deployment pattern, and containerization will power the future of AI-native applications. To address cloud-native complexity, many enterprises have placed their hopes in platform engineering to consolidate tools and create paved roads that improve the usability of cloud-native architecture.

By 2025, more than 95% of new digital workloads will be deployed on cloud-native platforms, Gartner estimates. But while the cloud-native ecosystem seems at its zenith, plenty of enterprises still haven’t made the shift to containers, signaling room for growth.

That growth will come not only in adoption, but also in the underlying platform, the tools and add-ons, and the use cases. Designed to orchestrate containers at massive scale, the descendant of Google’s Borg seems destined to become a standard deployment pattern for all kinds of workloads. In any case, the race between usability and complexity seems bound to continue.

Will Kubernetes become mature and easy in the coming years, or turn into a rebellious and unwieldy teen? Time will tell.
https://www.infoworld.com/article/3812622/will-kubernetes-ever-get-easier.html
