Note: This page assumes that you’ve experimented with Kubernetes before. At this point, you should have basic experience interacting with a Kubernetes cluster (locally with Minikube, or elsewhere), and using API objects like Deployments to run your applications.
If not, you should review the Beginner App Developer topics first.
After checking out the current page and its linked sections, you should have a better understanding of the following:
- Additional Kubernetes workload patterns, beyond Deployments
- What it takes to make a Kubernetes application production-ready
- Community tools that can improve your development workflow
Learn additional workload patterns
As your Kubernetes use cases become more complex, you may find it helpful to familiarize yourself with more of the toolkit that Kubernetes provides. Basic workload objects like Deployments (an API object that manages a replicated application) make it straightforward to run, update, and scale applications, but they are not ideal for every scenario.
The following API objects provide functionality for additional workload types, whether they are persistent or terminating.
Like Deployments, these API objects run indefinitely on a cluster until they are manually terminated. They are best for long-running applications.
StatefulSets (manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods) - Like Deployments, StatefulSets allow you to specify that a certain number of replicas should be running for your application.

Note: It’s misleading to say that Deployments can’t handle stateful workloads. Using PersistentVolumes (an API object that represents a piece of storage in the cluster, available as a general, pluggable resource that persists beyond the lifecycle of any individual Pod), you can persist data beyond the lifecycle of any individual Pod in your Deployment.

However, StatefulSets can provide stronger guarantees about “recovery” behavior than Deployments. StatefulSets maintain a sticky, stable identity for their Pods. The following table provides some concrete examples of what this might look like:
| | Deployment | StatefulSet |
|---|---|---|
| Example Pod name | my-app-6d8f9c7b4-x2k1q (random suffix) | my-app-0 (stable ordinal) |
| When a Pod dies | Rescheduled on any node, with a new name | Rescheduled on the same node, with the same name (my-app-0) |
| When a node becomes unreachable | Pod(s) are scheduled onto a new node, with new names | Pod(s) are marked as “Unknown”, and aren’t rescheduled unless the Node object is forcefully deleted |
In practice, this means that StatefulSets are best suited for scenarios where replicas (Pods) need to coordinate their workloads in a strongly consistent manner. Guaranteeing an identity for each Pod helps avoid split-brain side effects in the case when a node becomes unreachable (network partition). This makes StatefulSets a great fit for distributed datastores like Cassandra or Elasticsearch.
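As a rough sketch, a minimal StatefulSet manifest might look like the following (the name, image tag, and storage size are illustrative, not prescribed by this page):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra             # illustrative name
spec:
  serviceName: cassandra      # headless Service that gives each Pod a stable DNS entry
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11           # illustrative image tag
  volumeClaimTemplates:                 # each Pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

This would create Pods with the stable names cassandra-0, cassandra-1, and cassandra-2, each bound to its own persistent storage.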
DaemonSets (ensures a copy of a Pod is running across a set of nodes in a cluster) - DaemonSets run continuously on every node in your cluster, even as nodes are added or swapped in. This guarantee is particularly useful for setting up global behavior across your cluster, such as:
- Logging and monitoring agents
- A network proxy or service mesh
In contrast to Deployments, these API objects are finite. They stop once the specified number of Pods have completed successfully.
Jobs (a finite or batch task that runs to completion) - You can use these for one-off tasks like running a script or setting up a work queue. These tasks can be executed sequentially or in parallel. These tasks should be relatively independent, as Jobs do not support closely communicating parallel processes. Read more about Job patterns.
CronJobs (manages a Job that runs on a periodic schedule) - These are similar to Jobs, but allow you to schedule their execution for a specific time or for periodic recurrence. You might use CronJobs to send reminder emails or to run backup jobs. They are set up with a syntax similar to crontab.
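For illustration, a CronJob that runs a nightly task might look like this (the name, image, and command are placeholders, not from this page):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup        # illustrative name
spec:
  schedule: "0 2 * * *"       # crontab syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure          # re-run the container if the task fails
          containers:
          - name: backup
            image: busybox:1.36             # illustrative image
            command: ["sh", "-c", "echo running backup..."]   # illustrative task
```

Each time the schedule fires, the CronJob creates a Job, which runs the Pod to completion.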
There may be additional workload features, not mentioned here, that you find useful; they are covered in the full Kubernetes documentation.
Deploy a production-ready workload
The beginner tutorials on this site, such as the Guestbook app, are geared towards getting workloads up and running on your cluster. This prototyping is great for building your intuition around Kubernetes! However, in order to reliably and securely promote your workloads to production, you need to follow some additional best practices.
You are likely interacting with your Kubernetes cluster via kubectl (a command line tool for communicating with a Kubernetes API server). kubectl can be used to debug the current state of your cluster (such as checking the number of nodes), or to modify live Kubernetes objects (such as updating a workload’s replica count with kubectl scale).
When using kubectl to update your Kubernetes objects, it’s important to be aware that different commands correspond to different approaches:
- Purely imperative
- Imperative with local configuration files (typically YAML)
- Declarative with local configuration files (typically YAML)
There are pros and cons to each approach, though the declarative approach (such as kubectl apply -f) may be most helpful in production. With this approach, you rely on local YAML files as the source of truth about your desired state. This enables you to version control your configuration, which is helpful for code reviews and audit tracking.
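As a sketch of the declarative approach, you might keep a manifest like the following under version control (the name and image are illustrative):

```yaml
# deployment.yaml -- kept in version control as the source of truth
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.2.3   # illustrative image
```

To change the replica count, you would edit replicas in the file, commit the change, and re-run kubectl apply -f deployment.yaml; Kubernetes reconciles the live object to match the file. kubectl diff -f deployment.yaml can preview what would change before applying.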
For additional configuration best practices, familiarize yourself with this guide.
You may be familiar with the principle of least privilege: if you are too generous with permissions when writing or using software, the negative effects of a compromise can escalate out of control. Would you be cautious handing out sudo privileges to software on your OS? If so, you should be just as careful when granting your workload permissions to the Kubernetes API server (the application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster)! The API server is the gateway for your cluster’s source of truth; it provides endpoints to read or modify cluster state.
You (or your cluster operator, the person who configures, controls, and monitors clusters) can lock down API access with the following:
- ServiceAccounts (provides an identity for processes that run in a Pod) - An “identity” that your Pods can be tied to
- RBAC (manages authorization decisions, allowing admins to dynamically configure access policies through the Kubernetes API) - One way of granting your ServiceAccount explicit permissions
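To make this concrete, here is a sketch of a ServiceAccount granted read-only access to Pods via RBAC (the names and namespace are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader            # illustrative name; reference it via spec.serviceAccountName in your Pod
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]             # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only: no create, update, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-reader
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A Pod running under this ServiceAccount can list Pods in its namespace, but any write request to the API server is denied.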
For even more comprehensive reading about security best practices, consider checking out the following topics:
- Authentication (Is the user who they say they are?)
- Authorization (Does the user actually have permissions to do what they’re asking?)
Resource isolation and management
If your workloads are operating in a multi-tenant environment with multiple teams or projects, your container(s) are not necessarily running alone on their node(s). They are sharing node resources with other containers that you do not own.
Even if your cluster operator is managing the cluster on your behalf, it is helpful to be aware of the following:
- Namespaces (an abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster), used for isolation
- Resource quotas, which affect what your team’s workloads can use
- Memory and CPU requests, for a given Pod or container
- Monitoring, both on the cluster level and the app level
This list may not be completely comprehensive, but many teams have existing processes that take care of all this. If this is not the case, you’ll find the Kubernetes documentation fairly rich in detail.
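As an illustrative sketch, memory and CPU requests and limits are declared per container in the Pod spec (the names, namespace, and values here are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: team-a           # illustrative Namespace, used for team isolation
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:1.2.3   # illustrative image
    resources:
      requests:               # what the scheduler reserves for this container
        cpu: 250m             # 0.25 CPU core
        memory: 128Mi
      limits:                 # hard ceiling enforced at runtime
        cpu: 500m
        memory: 256Mi
```

Requests count against any ResourceQuota defined on the Namespace, so a cluster operator can cap what a team's workloads consume in aggregate.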
Improve your dev workflow with tooling
As an app developer, you’ll likely encounter the following tools in your workflow.
kubectl is a command-line tool that allows you to easily read or modify your Kubernetes cluster. It provides convenient, short commands for common operations like scaling app instances and getting node info. How does kubectl do this? It’s basically just a user-friendly wrapper for making API requests. It’s written using client-go, the Go library for the Kubernetes API.
To learn about the most commonly used kubectl commands, check out the kubectl cheatsheet. It explains topics such as the following:
- kubeconfig files - Your kubeconfig file tells kubectl what cluster to talk to, and can reference multiple clusters (such as dev and prod).
- The various output formats available - This is useful to know when you are using kubectl get to list information about certain API objects.
- The JSONPath output format - This is related to the output formats above. JSONPath is especially useful for parsing specific subfields out of kubectl get output (such as the URL of a Service, the API object that describes how to access applications, such as a set of Pods, and that can describe ports and load-balancers).
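As a sketch of those topics in practice (the resource names are illustrative, and the commands assume a reachable cluster):

```
# List Pods with extra columns (node, IP, ...) using the "wide" output format
kubectl get pods -o wide

# Print a full object in YAML form
kubectl get service my-service -o yaml

# Use JSONPath to pull out a single subfield, such as a Service's cluster IP
kubectl get service my-service -o jsonpath='{.spec.clusterIP}'
```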
For the full list of kubectl commands and their options, check out the reference guide.
To leverage pre-packaged configurations from the community, you can use Helm charts (a package of pre-configured Kubernetes resources that can be managed with the Helm tool).
Helm charts package up YAML configurations for specific apps like Jenkins and Postgres. You can then install and run these apps on your cluster with minimal extra configuration. This approach makes the most sense for “off-the-shelf” components which do not require much custom implementation logic.
For writing your own Kubernetes app configurations, there is a thriving ecosystem of tools that you may find useful.
Explore additional resources
Now that you’re fairly familiar with Kubernetes, you may find it useful to browse the following reference pages. Doing so provides a high-level view of what other features may exist:
In addition, the Kubernetes Blog often has helpful posts on Kubernetes design patterns and case studies.
If you feel fairly comfortable with the topics on this page and want to learn more, check out the following user journeys: