Kubernetes: A simple overview

Get a basic understanding of Kubernetes and then go deeper with recommended resources.

This overview covers the basics of Kubernetes: what it is and what you need to keep in mind before applying it within your organization.

The information in this piece is curated from material available on the O’Reilly online learning platform and from interviews with Kubernetes experts.

What is Kubernetes?

First developed by Google, Kubernetes is an open source orchestrator for deploying containerized applications in a clustered environment. Kubernetes allows DevOps teams to automate container provisioning, networking, load balancing, security, and scaling across a cluster, says Sébastien Goasguen in his Kubernetes Fundamentals training course.

More specifically, Goasguen says, Kubernetes provides the software necessary “to schedule containers on those different machines and then enables service discovery so that the various services made out of the containers can find each other.”

This allows development teams to manage distributed applications in an automated manner.

The connection between containers and Kubernetes

To understand what Kubernetes is and does, you first need to understand what containers are and why they exist.

The lifecycle of reliable and scalable applications delivered across the Internet presented new operational challenges for developers, engineers, and system operators. These challenges included service discovery, load balancing, health checks, storage management, workload scheduling, auto-scaling, and software packaging. Containers became a solution for addressing these issues and for deploying applications in a distributed manner.

A container consists of an entire runtime environment—application code plus everything needed to run it, such as system tools, libraries, and settings—all bundled into a single executable package. Containerizing an application and its dependencies helps abstract it from an operating system and infrastructure. It’s how an application can more easily move between different environments, such as staging and production, and between public and private clouds.

Multiple containers working together form the basis of the microservices architecture that delivers a modern application or system at scale. As more containers are used, teams need a way to orchestrate these containers in a coordinated manner.

Enter Kubernetes, which organizes the containers making up an application into pods, nodes, and namespaces for easier management and discovery.
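
To make that concrete, here is a minimal sketch of a Pod manifest; the pod name, namespace, labels, and container image are all placeholders rather than drawn from any particular application. It shows how a container is wrapped in a pod and placed in a namespace so other workloads can find and manage it:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-web          # placeholder pod name
      namespace: demo          # namespaces group related resources for management and discovery
      labels:
        app: hello-web         # labels let services and controllers select this pod
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder container image
          ports:
            - containerPort: 80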

Benefits of Kubernetes

There are four primary benefits to using Kubernetes to manage containers in a distributed system, according to Kubernetes: Up and Running authors Brendan Burns, Kelsey Hightower, and Joe Beda. These benefits are velocity, scaling, abstracting infrastructure, and efficiency.

Velocity

In today’s software development and delivery world, users expect an application to be available nearly 100% of the time. Applications can’t go down for maintenance. So, it’s important for teams to constantly push new features and updates without disruption to their services. This is what Burns, Hightower, and Beda mean by velocity.

They write:

“Velocity is measured not in terms of the raw number of features you can ship per hour or day, but rather in terms of the number of things you can ship while maintaining a highly available service. In this way, containers and Kubernetes can provide the tools that you need to move quickly, while staying available.”

Scaling

The components that make up a distributed application are split into small services. This microservices architecture—with APIs providing connective tissue between the services—allows for easier scaling of an application.

The containers making up the distributed application and the clusters that power those containers can be scaled more easily because of Kubernetes’ declarative nature. In a declarative system like Kubernetes, the engineer knows the system’s desired state and provides a representation of that state to the system through declarative configuration management. The desired state, as defined by the Kubernetes documentation, covers “what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more.”

Engineers configure the number of service copies (each a running process on the cluster) that should run to reach the desired scale. Kubernetes handles this change in scaled state automatically—assuming there are enough resources within a given cluster to scale the container, according to the authors of Kubernetes: Up and Running. To scale the cluster itself, teams just need to start a new machine and join it to the cluster.
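
As a minimal sketch of that declarative style (the resource names and container image below are placeholders), a Deployment manifest declares the desired number of replicas rather than the steps to create them:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 3               # desired number of copies of the service
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
            - name: web
              image: nginx:1.25   # placeholder image
              ports:
                - containerPort: 80

Changing replicas to 5 and reapplying the file with kubectl apply -f deployment.yaml (or running kubectl scale deployment hello-web --replicas=5) simply states the new desired state; Kubernetes then creates or removes pods to match it.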

Abstracting infrastructure

The ease and self-serve nature of public cloud infrastructure has made it a popular choice with organizations. However, it can be challenging to run a distributed application among multiple public cloud providers or in a mixed public-private cloud environment. Kubernetes helps address this problem. From Kubernetes: Up and Running:

“When your developers build their applications in terms of container images and deploy them in terms of portable Kubernetes APIs, transferring your application between environments, or even running in hybrid environments, is simply a matter of sending the declarative config to a new cluster. Kubernetes has a number of plug-ins that can abstract you from a particular cloud. For example, Kubernetes services know how to create load balancers on all major public clouds as well as several different private and physical infrastructures.”
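
As a rough illustration of that portability (again with placeholder names), a Service of type LoadBalancer is written the same way regardless of where the cluster runs; on a supported public cloud, the cluster’s cloud integration provisions that provider’s load balancer behind it:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-web
    spec:
      type: LoadBalancer       # a cloud plug-in provisions the provider’s load balancer
      selector:
        app: hello-web         # route traffic to pods carrying this label
      ports:
        - port: 80
          targetPort: 80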

Efficiency

Efficient use of infrastructure is important because there are a variety of hard and soft costs at play: power usage, cooling requirements, data center space, compute power, and ongoing maintenance are all factors. Kubernetes helps with efficiency and cost management with tools that automate the distribution of applications "across a cluster of machines, ensuring higher levels of utilization than are possible with traditional tooling," write the authors of Kubernetes: Up and Running.

The challenges of Kubernetes

While Kubernetes offers many advantages for deploying and managing distributed systems, it also comes with challenges. These include getting started and the learning curve, the pace of the Kubernetes development cycle, and applying the cloud native architectural changes needed to truly take advantage of Kubernetes’ best capabilities.

Getting started and Kubernetes’ learning curve

Toolkit discovery, setting up clusters, and the steep learning curve are big challenges many developers face when getting started with Kubernetes.

“When software development happens locally on a desktop, it’s a common workflow,” says Roland Huß, a software engineer and co-author of Kubernetes Patterns, in an interview. “When you move to Kubernetes, you have a lot more complexity to run an application. Ideally, tooling would do this for you. With Kubernetes there are a lot of new emerging toolkits and it’s not easy to discover the right tooling to get your application onto Kubernetes easily.”

And because Kubernetes is still in an adoption, development, and growth phase, there’s no clear answer for this challenge yet, Huß says. “You can’t just push a button and get an application on Kubernetes,” he adds.

To get benefits from Kubernetes you have to work within a Kubernetes cluster, which sounds obvious, says Ewout Prangsma, senior developer at ArangoDB, in an interview. But there’s no effortless way to set up a cluster. “It’s much easier than a year ago,” Prangsma says, “and will likely continue getting easier, but you still have to do a lot and that’s a hassle.”

Beyond getting started, Kubernetes’ steep learning curve can pose a daunting challenge, says Michael Gasch, an application platform architect at VMware. New users can potentially struggle with upgrades, patches, and backups; they can also struggle to understand Kubernetes' fundamental architecture, as well as concepts like pods, replicasets, and deployments, Gasch says.

“There is so much overhead for developers that it doesn’t always justify using Kubernetes,” Gasch says. “Installation is a challenge, but the bigger complexity is understanding the entire ecosystem of Kubernetes.”

The Kubernetes development cycle

Though Kubernetes started life inside Google, it’s now a popular and essential open-source technology. As such, there are several large technology companies that have put their weight behind its development, including Microsoft, IBM, and Red Hat.

“Keeping up with the changes and new features has essentially become a full-time job,” says Prangsma. New features are released by the community on a monthly basis, and every quarter there is a significant update to Kubernetes.

“I’ve never seen a project move at this speed,” Prangsma says. “You are pretty much forced to update your platform twice a year as a result, at least, and that is definitely challenging.”

Prangsma adds that Kubernetes’ API versioning is well done, which makes troubleshooting updates and incompatibilities easier than it otherwise would be.

Getting the most from Kubernetes through cloud native adoption

Although Kubernetes can run anything that you can put in a container, that doesn't necessarily produce the best results, says John Arundel, co-author of Cloud Native DevOps with Kubernetes, in an interview.

“You can take a legacy Rails app, complete with its own stack, database, Redis instance, and so on, put that in a container and run it in your Kubernetes cluster,” he says. “While that will work, it doesn't make the best use of Kubernetes' capabilities.”

Getting the most from Kubernetes requires a cloud native approach. Developers need to re-architect an application with the cloud in mind, Arundel says, breaking it into small, manageable, co-operating components (known as microservices). But this often brings its own challenges.

"Deciding what those microservices should be, where the boundaries are, and how the different services should interact is no easy problem,” Arundel says. “Good cloud native service design consists of making wise choices about how to separate the different parts of your architecture.”

However, even a well-designed cloud native application using containers and Kubernetes is still a distributed system, Arundel says, which makes it inherently complex, difficult to observe, and prone to failure in surprising ways.

None of these challenges are necessarily Kubernetes’ fault, says Prangsma. “Are these downsides to Kubernetes specifically? Maybe they are, maybe they aren’t. But we still have to deal with them as a result of using Kubernetes,” he says.

Learn more

Ready to go deeper into Kubernetes? Check out these recommended resources from O’Reilly’s editors.

Kubernetes: First Steps — Take your first steps with Kubernetes with these recommended resources from Kubernetes co-founder Brendan Burns.

Kubernetes Fundamentals — This Kubernetes crash course is designed for those new to Kubernetes and for people who want to sharpen their skills.

Kubernetes: Up & Running — This book shows you how Kubernetes and container technology fit into the lifecycle of a distributed application. You’ll learn how to use tools and APIs to automate scalable distributed systems.

Kubernetes Patterns — This guide provides common reusable elements, patterns, principles, and practices for designing and implementing cloud native applications on Kubernetes.

Managing Kubernetes — With this practical book, site reliability and DevOps engineers will learn how to build, operate, manage, and upgrade a Kubernetes cluster—whether it resides on cloud infrastructure or on-premises.

Kubernetes Cookbook — This book provides detailed solutions for installing, interacting with, and using Kubernetes in development and production.

Cloud Native DevOps with Kubernetes — In this pragmatic book, you’ll build an example cloud native application and its supporting infrastructure, along with a development environment and continuous deployment pipeline that you can use for your own applications.

Programming Kubernetes — This guide shows application and infrastructure developers, DevOps practitioners, and site reliability engineers how to develop cloud native apps that run on Kubernetes.
