
News

Posted over 7 years ago by Kirill
In previous tutorials, we've already discussed how to deploy and autoscale Kubernetes clusters on GCE, DigitalOcean, OpenStack, and Amazon AWS. It's now time to walk you through the same process for Packet.net, a cloud provider for which support was added in recent Supergiant releases. By the end of this tutorial, you'll know how to link a Packet.net cloud account and leverage Supergiant's autoscaling packing algorithm to deploy a cluster while minimizing cloud costs. Sounds promising? It actually is!

What is Packet.net?

Packet.net is a cloud provider that offers fully isolated dedicated servers on a pay-as-you-go (PAYG) basis. The platform thus combines the isolation of bare-metal servers with the flexibility and scalability of a cloud. Packet.net uses premium hardware from Dell, SuperMicro, and Quanta to provision multiple flavors of machines suitable both for development and for large-scale production.

Prerequisites

For this tutorial, we will need:

- A Packet.net account with the capacity to deploy at least t1.small (formerly Type 0) machine instances with 8 GB of RAM and 4 CPU cores. This is the minimal requirement for instantiating Kubernetes masters and nodes on Packet.net.
- A running Supergiant server (see the official Supergiant GitHub repo for installation details). Supergiant supports the following installation options: a virtual machine (VM) hosted by a cloud provider (e.g., Packet), an on-premises VM, or an on-premises bare-metal server.
- A Packet.net API token. The token identifies the user(s) on whose behalf Supergiant connects to the Packet API.
- An operational Packet Project to deploy a Kubernetes cluster to.

Linking a Packet.net Cloud Account

Linking a Packet cloud account takes a few simple steps. First, click 'Add a Cloud Account' in your Supergiant dashboard, as in the image below. This takes you to the list of available cloud providers (see the image below). On the list page, select Packet.net and click on it. You'll be sent to a form for Packet.net cloud credentials.

As you can see, the only credential required is an API token, which allows Supergiant to communicate with your Packet.net account on your behalf. Getting this token from Packet.net is plain and simple: under your profile, select API Keys, then create a new key or use an existing one. (Note: the API key must have Read/Write permissions.)

After you obtain the token, paste it into the corresponding field of the Packet.net cloud credentials form, create a user-friendly name for your cloud account (for example, "packet"), and click the "Submit" button. Supergiant will automatically link the Packet cloud account using the token. Shortly afterwards, you'll see the newly added Packet.net account in the cloud accounts list.

Deploying a Kubernetes Cluster

Now that we've linked a brand-new Packet.net account, we can deploy a Kubernetes cluster to it. To begin deploying a Kube, first click 'Launch a Cluster' or 'Take me to Clusters' from the Supergiant dashboard. Assuming no clusters have been deployed yet, you'll see a 'Deploy your First Cluster' button. Click it to open a list of available cloud accounts, and then select your Packet.net account. This sends you to the Kube's configuration page, which includes two sections: general cluster configuration and Packet-specific settings. Make sure that you understand what each parameter discussed below means before deploying the Kube to Packet.net.
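Incidentally, you can sanity-check the API token from a terminal before relying on it. The sketch below assumes the classic Packet REST API at api.packet.net with the X-Auth-Token header; verify the base URL and header against Packet's current API docs. The second call lists your Projects, whose UUID is referenced in the Packet configuration below.

    # Verify that the token works (returns your Packet user profile)
    curl -s -H "X-Auth-Token: $PACKET_API_TOKEN" https://api.packet.net/user

    # List your Projects; each "id" field is a Project UUID
    curl -s -H "X-Auth-Token: $PACKET_API_TOKEN" https://api.packet.net/projects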
General Configuration

- Master Node Size -- the size of the server that will run as a Kubernetes master. By default, we use Type 0 Packet.net servers, which are cost-effective bare-metal machines with sufficient resources for running Kubernetes masters.
- Kube Master Count -- the number of Kube masters. The default value is 1.
- SSH Pub Key -- an SSH public key used to securely connect to the cluster. You can access existing SSH keys or add new ones to your Packet.net account under your profile's dropdown menu.
- Node Sizes -- the array of node sizes for your Packet.net cluster. Each node size maps to a Packet.net bare-metal machine type that Supergiant's packing algorithm will use to autoscale your Kubernetes cluster so that it always has sufficient resources to run your applications. The default node sizes are Type 0, Type 1, Type 2, Type 3, and Type 2A.

Note: Packet.net renamed some of its instance types; however, Supergiant always maps the conventional names to the most recent naming standard, so users needn't worry about this. If you expect your cluster to scale to large workloads, be sure to include all available machine options specified in this list:

    "packet": [
      {"name": "t1.small (Type 0)", "ram_gib": 8, "cpu_cores": 4},
      {"name": "c1.small (Type 1)", "ram_gib": 32, "cpu_cores": 4},
      {"name": "m1.xlarge (Type 2)", "ram_gib": 256, "cpu_cores": 24},
      {"name": "c1.large.arm (Type 2A)", "ram_gib": 128, "cpu_cores": 96},
      {"name": "c1.xlarge (Type 3)", "ram_gib": 128, "cpu_cores": 16}
    ]

For more information, see the full list of Packet bare-metal servers.

Packet Configuration

- Facility -- the Packet.net region in which the cluster will be created. The default region is ewr1. For a more detailed description of available Packet regions, see their official documentation.
- Project -- the Packet.net Project in which the cluster will be created. The field accepts a Project UUID (e.g., 45acaac1-8adg-7a4f-43da-1aba2183da30). The Project's UUID can be obtained under Project Settings in your Packet.net control panel, or by listing projects via the API as shown above.

Watch Supergiant Go!

After you have configured the deployment, click 'Submit', and Supergiant will start provisioning your Kube. Supergiant orchestrates the provisioning of all Packet.net services, including block storage, networking, and security, which may take 5-7 minutes. Once the process completes, you'll see the cluster's status change to 'Running' and the newly created master and node instances in your Packet.net console. You can now also access the cluster's stats via Supergiant's monitoring dashboard. There you'll see the amount of RAM and CPU used, as well as raw data describing the Kubernetes API and the deployment's details.

Deleting the Kube

If you want to delete your Packet.net Kube, go to "Clusters" in the Supergiant dashboard, select the cluster you want to delete, and click the "Delete Selected Cluster" button. The cluster's status will change to "Deleting", and after a short while Supergiant will clean up the Packet.net instances and all services linked to the Kube.

Conclusion

As you've seen, linking a Packet.net cloud account and deploying a Kube to it is plain and simple. Getting your Packet.net API token and editing the cluster's configuration are the only two steps between you and a reliable Kubernetes cluster deployed to your Packet servers. Stay tuned to the Supergiant blogs and tutorials, and you'll soon have more useful information on managing your Kubernetes clusters.
Posted over 7 years ago by Kirill
The Kubernetes resource model is designed to optimize resource utilization by containers and to ensure efficient scheduling of Pods and high availability of applications. The core of the Kubernetes approach is the resource requests and limits defined for containers during a Pod's creation. In this tutorial, we describe the inner workings of the Kubernetes resource model and walk you through assigning compute resources (CPU and RAM) to containers using Kubernetes native tools and the API. We also discuss how resources can be assigned using the Supergiant platform, which provides a Kubernetes-as-a-Service solution. We hope this tutorial gives you a deeper knowledge of how to assign resources to containers using both Kubernetes native tools (kubectl) and Supergiant.

How Does the Kubernetes Resource Allocation Model Work?

In Kubernetes, you assign resources to containers by specifying CPU and memory (RAM) requests and limits for each container in a Pod. The scheduler then uses these values to choose the right node on which to place your Pods. The Kubernetes documentation defines a resource request as the amount of a computing resource (CPU or memory) that Kubernetes will guarantee to the container. Correspondingly, a resource limit is defined as the maximum amount of the resource that Kubernetes will allow the container to consume.

It's important to note that Kubernetes decides whether a Pod can be scheduled on a given node by computing the sum of the requests and limits of all containers in that Pod. That is, the Pod may not be scheduled on a Node if the combined resource requests of its containers exceed the allocatable maximum on that Node. It is the task of both the scheduler and the kubelet to make sure that the sum of all requests by all containers stays within the node's capacity for both resource types (CPU and memory).

Calculation of Resource Requests and Limits

A request specified for a container must be greater than or equal to 0 and must not be greater than the Node Allocatable capacity. This rule can be summed up by the following formula: 0 <= request <= Node Allocatable. In turn, a limit must be greater than or equal to the request and has no upper bound: request <= limit <= Infinity.

One should remember, though, that scheduling depends on requests, not limits. In other words, a Pod can be scheduled on a Node if its resource requests fit within the Node's allocatable capacity, even if its limits exceed that capacity. Also note that even if the actual memory or CPU utilization on a node is low, the scheduler will still refuse to place a Pod on the node if the Pod's resource requests exceed the node's remaining allocatable capacity. This protects the system against resource shortages during traffic spikes.

Kubernetes Resource Types

Kubernetes abstracts computing resources from the underlying processor architectures, exposing them on demand in raw values, or base units. For CPU, the base unit is a core; for memory, it is the byte. A memory resource can be specified as a plain integer or as a fixed-point integer using suffixes such as E, P, T, G, M, K. In turn, one CPU is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 hyperthread on an Intel processor with Hyper-Threading

Kubernetes allows CPU values to be specified in fractional quantities (e.g., 0.5 CPU). For example, a container with a resource request of 0.5 CPU is guaranteed half of a CPU. 0.5 CPU is also equivalent to 500m, which stands for "five hundred millicpu" or "five hundred millicores". Note that, in Kubernetes, the CPU resource is always requested as an absolute quantity, meaning that 0.1 is the same amount of CPU on a single-core, dual-core, or any other machine.
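As a quick illustration of these units, the fragment below (not a complete manifest; the values are illustrative) shows equivalent ways of writing the same quantities:

    resources:
      requests:
        cpu: "0.5"          # half a core, written as a decimal
        # cpu: 500m         # the same half core, in millicores
        memory: "128Mi"     # power-of-two mebibytes
        # memory: 134217728 # the same amount, as a plain integer of bytes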
Pod Classes Depending on Resource Requests and Limits

Depending on the ratio of resource requests and limits, we can end up with three distinct classes of Pods: guaranteed Pods, burstable Pods, and best-effort Pods.

Guaranteed Pods

A Pod is regarded as guaranteed if limits, and optionally requests (not equal to 0), are set for all resources across all containers, and requests and limits are equal. Example:

    containers:
    - name: first
      resources:
        limits:
          cpu: 20m
          memory: 1Gi
        requests:
          cpu: 20m
          memory: 1Gi
    - name: second
      resources:
        limits:
          cpu: 300m
          memory: 400Mi
        requests:
          cpu: 300m
          memory: 400Mi

In this example, both the 'first' and the 'second' container in the Pod have requests equal to their limits. This makes the Pod guaranteed.

Burstable Pods

A Pod is treated as burstable if requests, and optionally limits, are set (not equal to 0) for one or more resources in one or more containers, and they are not equal. When limits are not specified, Pods can use as many resources as the Node can allocate. Example:

    containers:
    - name: first
      resources:
        limits:
          memory: 2Gi
        requests:
          memory: 1Gi
    - name: second
      resources:
        limits:
          cpu: 500m
        requests:
          cpu: 300m

In this example, the 'first' and 'second' containers set unequal requests and limits, each for a different resource. This makes the Pod burstable.

Best-Effort Pods

A Pod is treated as best-effort if neither requests nor limits are set for any resource in any of its containers. Example:

    containers:
    - name: first
      resources: {}
    - name: second
      resources: {}

As we see, in this example neither the "first" nor the "second" container has its resources specified.

Kubernetes grants different resource rights and priorities to the Pod classes described above. Best-effort Pods have the lowest priority and are the first candidates for eviction if the system runs out of memory. Guaranteed Pods, in turn, have the highest priority and are guaranteed not to be killed or throttled unless they exceed their limits and there are no lower-priority containers to remove. Finally, burstable Pods enjoy minimal resource guarantees but are allowed to use more computing resources when available. If no best-effort Pods are present, burstable Pods are the first to be killed when the cluster runs out of capacity.

The following rules apply to all Pods regardless of their class:

- Containers can exceed their memory requests if memory is available on the Node. However, they are not allowed to use more memory than specified in their memory limit; a container that does becomes a candidate for termination. If a terminated container is restartable, the kubelet will restart it.
- Containers may or may not be able to exceed their CPU limit for extended periods of time. Whether containers can run past their CPU limit depends on the container runtime in use (e.g., Docker, rkt): some containers can exceed their limit for a short time while others cannot. Whether a container actually gets "burstable" CPU depends on how much free CPU is currently available on the node. If other containers are competing for those resources, the container will be throttled back down to its request; if nothing else wants those resources, the container can use as much CPU as its limit allows.
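Kubernetes records the computed class on the Pod itself, so you can read it back. Assuming a Pod named test-pod:

    kubectl get pod test-pod -o jsonpath='{.status.qosClass}'
    # prints one of: Guaranteed, Burstable, BestEffort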
Why Use Resource Requests and Limits at All?

We now have a basic understanding of how resource requests and limits affect the fate of Pods. But why should we use them at all? Ensuring efficient consumption of computing resources and ensuring that high-priority Pods keep running are the two key motivations for using Kubernetes resource requests and limits. More specifically:

- Pods with low CPU and memory requests have a good chance of being scheduled.
- If you set resource limits higher than resource requests, you can create Pods that burst whenever spare CPU and memory are available. At the same time, resource limits guarantee that the resources consumed during a burst are capped at a known amount.

Users can, however, run Pods without specifying resource requests and limits. In this case, the following rules apply:

- A container with no resource limits can use all available resources on its Node.
- If the container runs in a namespace with a default memory or CPU limit, the container is automatically assigned that default; a cluster admin can use a LimitRange to specify default memory and CPU limits for all containers in the namespace.

Note: If you want to find out how Supergiant further extends the Kubernetes resource model with its cost-effective autoscaling algorithm, you will want to read this article.

Tutorial

In what follows, we are going to show how you can easily assign resource requests and limits to containers in a Pod using Kubernetes native tools. To complete this tutorial, you'll need the following prerequisites:

- A running Kubernetes cluster. See the Supergiant GitHub wiki for details on deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster locally using Minikube.
- The kubectl CLI, installed and configured to communicate with the cluster. See how to install kubectl here.
- A Heapster service running in your cluster. Note: Supergiant clusters are deployed with the Heapster service by default. You can verify whether Heapster is running with kubectl get services --namespace=kube-system, which should produce output similar to the following on a cluster deployed with Supergiant:

    NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
    heapster              ClusterIP   10.3.106.144   <none>        80/TCP              5d
    kube-dns              ClusterIP   10.3.0.10      <none>        53/UDP,53/TCP       5d
    monitoring-influxdb   ClusterIP   10.3.70.70     <none>        8083/TCP,8086/TCP   5d
    tiller-deploy         ClusterIP   10.3.195.104   <none>        44134/TCP           5d

We are now all set and can proceed to assigning resources to our Kubernetes containers.

Step 1: Create a new namespace

Creating a new namespace is a good practice that isolates the computing resources and Pods used in a project from the rest of the cluster. We can create a new namespace for this tutorial with the following command:

    kubectl create namespace assigning-resources-tut

If it works, the console will produce the following output:

    namespace "assigning-resources-tut" created
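To double-check that the namespace exists before proceeding, you can list it back; the output will look similar to this:

    kubectl get namespace assigning-resources-tut
    # NAME                      STATUS    AGE
    # assigning-resources-tut   Active    10s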
Step 2: Specify CPU and memory requests and limits

For this tutorial we are creating a single-container Pod, the most common type of Pod in Kubernetes. The container image used is the NGINX web server, pulled from the Docker Hub container repository. To specify CPU and memory requests for a container, use the spec.containers.resources.requests field in the Pod manifest; for resource limits, use the spec.containers.resources.limits field. We decided to set the resource request for the NGINX container at 0.5 CPU (or 500 millicpus) and the CPU limit at 1 CPU. Correspondingly, we set a memory request of 500 MiB and a memory limit of 700 MiB for this container. Here's the configuration file for the Pod:

    apiVersion: v1   # Might differ depending on the Kubernetes API version you use
    kind: Pod
    metadata:
      name: test-pod
      namespace: assigning-resources-tut
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          limits:
            cpu: "1"
            memory: "700Mi"
          requests:
            cpu: "0.5"
            memory: "500Mi"

Step 3: Create the Pod

To create the Pod, first save the configuration above to a file (e.g., test-pod.yaml), and run the following command. (Note: use your own path to the file.)

    kubectl create -f test-pod.yaml --namespace=assigning-resources-tut

Step 4: Check whether the Pod's container is running in our namespace

To accomplish this, use the get pod command with the Pod's name and the --namespace argument set to "assigning-resources-tut", like this:

    kubectl get pod test-pod --namespace=assigning-resources-tut

You should see the following output:

    NAME       READY     STATUS    RESTARTS   AGE
    test-pod   1/1       Running   0          8s

It indicates that test-pod is running with no restarts and is 8 seconds old. You can also view detailed information about the Pod with the following command, which outputs the Pod's data in YAML format:

    kubectl get pod test-pod --output=yaml --namespace=assigning-resources-tut

Among other things, the output shows that the container is running with the resource limits and requests we specified, so everything works as expected:

    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: "1"
            memory: 700Mi
          requests:
            cpu: 500m
            memory: 500Mi

Step 5: Check the actual resource usage

It's very convenient to be able to track the resources your Pod is actually consuming, and we can use the Heapster service to do it. Heapster provides cluster monitoring and performance analysis for Kubernetes and is installed by default on clusters deployed with Supergiant. To use Heapster, we should first start a proxy:

    kubectl proxy

The proxy runs in the current terminal window, so open another terminal to query Heapster. Now, to get the CPU usage rate, run the following in the new terminal window:

    curl http://localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/assigning-resources-tut/pods/test-pod/metrics/cpu/usage_rate

As you see, we are curling the cpu/usage_rate endpoint for our namespace and the "test-pod" running in it. You should get output similar to this:

    {
      "metrics": [
        { "timestamp": "2018-05-13T21:43:40Z", "value": 0.1 },
        { "timestamp": "2018-05-13T21:44:40Z", "value": 0.2 },
        { "timestamp": "2018-05-13T21:45:00Z", "value": 0.22 }
      ]
    }

It shows a series of timestamps and the corresponding CPU usage value at each one, which lets us track CPU usage dynamics over time. Similarly, to get the memory usage, we can run a command against the memory/usage endpoint:

    curl http://localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/assigning-resources-tut/pods/test-pod/metrics/memory/usage

The output below indicates that the Pod is using 13246464 bytes of RAM:

    { "timestamp": "2018-05-13T22:00:40Z", "value": 13246464 },
    { "timestamp": "2018-05-13T22:01:00Z", "value": 13246464 }

Step 6: Delete the Pod

Now that our tutorial is over, let's clean up the cluster by deleting the Pod:

    kubectl delete pod test-pod --namespace=assigning-resources-tut

That's it! You've learned how to assign resources to containers in a Pod. It's as simple as that!
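As an aside, on clusters where Heapster (or, in later Kubernetes versions, metrics-server) is running, kubectl can summarize the same usage data without raw API calls. The command is standard; the output values below are indicative:

    kubectl top pod test-pod --namespace=assigning-resources-tut
    # NAME       CPU(cores)   MEMORY(bytes)
    # test-pod   1m           13Mi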
Managing Resource Requests and Limits with Supergiant

Supergiant is a very flexible system that combines an easy-to-use UI for deploying clusters and resources with access to the low-level Kubernetes APIs and tools. For example, once you've deployed a Kube with Supergiant, you can use kubectl on the master the same way as described in this tutorial. But what if you don't have time to learn kubectl and the low-level Kubernetes API, yet still want to set resource limits and requests for the containers in your Pod?

Supergiant solves this problem by providing access to roughly 160 Helm charts in the /stable Kubernetes Helm repository. This repository stores curated and well-tested Helm charts, which dramatically simplifies the deployment of apps on Kubernetes (e.g., see the partial view of Helm charts in the image below). Unfortunately, not all of these charts expose the ability to set resource requests and limits. When they do, however, you can use Supergiant's dashboard to configure them.

One chart that offers such functionality is Sumokube, a hosted logging platform by Sumologic. To find this app, click "Apps" in the main navigation menu and then "Deploy New App". Enter "Sumokube" in the search field and select the chart to open its configuration. As you can see, the chart contains default values for resource requests and limits, which can easily be changed to match your needs. After all edits are made, just click "Submit", and the app will be deployed on your cluster within a minute or so.

As we've mentioned, not all Helm charts expose resource requests and limits. However, you can always add your custom Helm repositories to Supergiant, or use native Kubernetes tools like kubectl as discussed above.

Conclusion

We hope that this article shed some light on how the Kubernetes resource model actually works. As we've learned, the concepts of resource requests and limits are quite simple yet powerful. Used judiciously, they let you control how your Pods consume computing resources in the cluster. Kubernetes gives you the flexibility to define high-priority and low-priority Pods, distributing limited resources in the most efficient way and ensuring that the most critical applications and containers keep running notwithstanding unexpected traffic spikes and node-level failures.
Posted over 7 years ago by Kirill
Kubernetes (K8s) is an open source platform that automates the deployment, scaling, and management of containerized applications, workloads, and services. The platform offers a set of abstraction layers for container management that is agnostic about the underlying container technologies (e.g., Docker, rkt) used for container packaging. Among other things, Kubernetes facilitates the efficient management of computing resources, deployment, scheduling, horizontal and vertical scaling, updating, security, and much more, through various abstractions.

In this article, we discuss Kubernetes Pods, one of the central concepts in Kubernetes. We first focus on the Pod's architecture and functions, its basic use cases and benefits, and then proceed to a discussion of deployment options. We cover four broad options: native Kubernetes deployments via direct Pod creation, the Deployment Controller, and the Replication Controller, plus Supergiant, our Kubernetes-as-a-Service platform that simplifies the deployment and management of K8s clusters and resources via a centralized UI that abstracts away the Kubernetes API. By the end of this tutorial, you will have a better understanding of the available options for Pod deployment in Kubernetes and beyond.

What Is a Pod?

A Pod is a unit of deployment in Kubernetes that provides a set of abstractions and services for applications running on the Kubernetes cluster. It can include one or multiple containers (e.g., Docker, rkt) that share storage and network resources, Linux cgroups, and namespaces, are co-scheduled and co-located, and share the same life cycle. But why would we use Pods at all instead of conventional containers, which also provide a good level of isolation? The answer is that the Pod model offers higher-level abstractions that make it easy to plug Kubernetes services into containers and applications. Pods augment the container model by automatically handling co-scheduling, coordinated replication, resource sharing, dependency management, and the shared fate of the applications running in a Pod. Thus, Pods can be imagined as "logical hosts" that contain relatively tightly coupled application containers and consume Kubernetes orchestration services to manage them.

A Pod's Shared Resources and Communication

Containers in a Pod share contexts that include a set of Linux cgroups and namespaces as well as Kubernetes-native facets of isolation. Also, similarly to containers, Pods may have shared volumes that can be accessed by the applications within a Pod. These volumes are defined in the Pod spec and mounted into each container's filesystem.

In addition, Pods are created with internal communication mechanisms. Each Pod receives a unique IP address and a port space shared by all containers running in it. Containers can communicate via localhost, but because of the shared network namespace (IP and ports) they must also coordinate port usage to avoid conflicts. Pods, in turn, can communicate with each other using their IP addresses on a flat shared network like Flannel, with each Pod's hostname set to the Pod's name.

Uses of Pods

The most basic use of Pods is running a single container, where the Pod serves as a wrapper around that container (e.g., a Docker container) and Kubernetes manages Pod services to containers rather than the containers directly. The more advanced use of Pods is running multiple tightly coupled containers.
In this scenario, a Pod is a wrapper around multiple co-located containers that share resources and have distinct responsibilities. For example, one could imagine a Pod encapsulating two containers, one of which acts as a static file server while the second serves as a 'sidecar' container executing operations on these files (e.g., updates and transformations); a minimal sketch of such a Pod appears at the end of this section.

These two approaches enable a number of use cases for Pods, including the following:

- hosting vertically integrated application stacks (e.g., MEAN and LAMP) that comprise a number of tightly coupled applications
- content management systems (CMS), file loaders, and local cache managers
- log shippers, backup, compression, and snapshotting
- monitoring adapters, event publishers, data change watchers, etc.
- network tools like proxies, bridges, and adapters

A Pod's Life Cycle

Pods are created and deployed with a unique ID (UID) and scheduled to Nodes, where they live until termination or deletion. (Note: Pods die together with the Nodes they live on.) Notably, a Pod is not rescheduled to a new node after termination. Rather, Kubernetes creates an identical Pod with the same name, if needed, but with a new UID. When a Pod dies, its shared volumes are also detached.

A Pod's life cycle consists of a number of phases defined in the PodStatus object. Possible values for the phase include the following:

- Pending: The Pod has been accepted by the system, but one or several container images have not yet been downloaded or installed.
- Running: The Pod has been scheduled to a specific node, and all containers are running.
- Succeeded: All containers in the Pod terminated successfully and will not be restarted.
- Failed: At least one container in the Pod terminated with a failure; that is, one of the containers either exited with a non-zero status or was terminated by the system.
- Unknown: The state of the Pod cannot be obtained for some reason, typically due to a communication error.

Benefits of Pods

In addition to better isolation and access to Kubernetes orchestration services, Pods offer a number of other important advantages over running multiple programs in a single (Docker) container:

- Transparency: Thanks to Pods, containers are visible to the infrastructure and OS, enabling services such as process management and resource monitoring.
- Decoupled software dependencies: Running a single container per Pod allows independent versioning, deployment, and upgrading of the containers that make up an application.
- Simplicity: Users don't need their own process managers to handle signal and exit-code propagation.
- Efficiency: Because infrastructure services are delegated to the system, containers can be more lightweight inside Pods.
- Pluggability: Running containers in Pods allows plugging in Kubernetes schedulers and controllers.
- High-availability applications: Pods can be replaced in advance of their termination and deletion, ensuring the high availability of your applications.
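Here is the promised sketch of the two-container 'sidecar' pattern described above. It is a minimal, hedged illustration (the Pod name, images, and paths are all illustrative): an NGINX container serves files from a shared emptyDir volume while a busybox sidecar periodically rewrites one of them.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar          # illustrative name
    spec:
      volumes:
      - name: shared-content
        emptyDir: {}                  # scratch volume shared by both containers
      containers:
      - name: web                     # serves the shared files
        image: nginx:latest
        volumeMounts:
        - name: shared-content
          mountPath: /usr/share/nginx/html
      - name: content-refresher       # the 'sidecar' that updates them
        image: busybox
        command: ['sh', '-c', 'while true; do date > /content/index.html; sleep 5; done']
        volumeMounts:
        - name: shared-content
          mountPath: /content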
Deploying Pods

Kubernetes provides several options for creating and managing Pods:

- direct creation of Pods via Pod templates
- using the Deployment Controller
- using the Replication Controller
- using a Kubernetes-as-a-Service provider like Supergiant

Prerequisites

To try these options, you'll need a few prerequisites in place:

- A running Kubernetes cluster: If you don't have a Kubernetes cluster yet, you can run a local single-node Kubernetes cluster with Minikube, or link your cloud account to Supergiant and deploy a cluster on it.
- The kubectl command line tool: You can find instructions for installing kubectl here.

Direct Pod Creation Using Pod Templates

In most cases, you don't need to create Pods directly. (Note: Deployments are the recommended way to create Pods in Kubernetes.) However, manual creation of Pods can be useful for development and testing. To deploy a Pod directly, you first define a Pod template, a Pod specification describing the Pod's runtime, the container images used, and other application-specific settings (e.g., ports and proxies). Pod templates can be written in YAML or JSON syntax; YAML is used in the example below.

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

As you see, this Pod template does not specify a desired state for the Pod, such as a number of replicas to bring up. Therefore, if the template changes, the change won't affect already-running Pods. This approach radically simplifies the platform's semantics and increases the flexibility of deployments.

Creating a Pod from Scratch

In this example, we show the whole process of creating a Pod from a Pod template.

Step 1. Define a new Pod using a Pod template

Create a Pod template for the popular Redis data structure store, retrieved from the Docker Hub container repository, and save it to a file named redis-pod.yaml for later use.

    apiVersion: v1
    kind: Pod
    metadata:
      name: redis
    spec:
      containers:
      - image: redis:latest
        name: redis
        ports:
        - containerPort: 6379
      restartPolicy: Never

This Pod template specifies the following important settings:

- apiVersion: the Kubernetes API version used.
- spec.containers.image: the Redis container image to download from Docker Hub (here, the latest release).
- spec.containers.ports.containerPort: the port assigned to the Redis Pod.
- restartPolicy: the restart policy for the Pod. Available options are Always, OnFailure, and Never. In this example, we ask Kubernetes never to restart the Pod if it fails.

Step 2. Create the Pod

Once the Pod template is edited and saved, we can use the kubectl CLI to create the Pod. (See the instructions on how to install kubectl on your Kubernetes master.)

    $ kubectl create -f redis-pod.yaml   # use your own path to redis-pod.yaml
    pod "redis" created

Step 3. Check the Pod

We can now see the updated list of running Pods using the following command:

    kubectl get pods

The console output shows the name of the Pod, its status, the number of restarts, and the Pod's age:

    NAME      READY     STATUS    RESTARTS   AGE
    redis     1/1       Running   0          14s

Deleting the Pod

The Pod can be deleted with the following command, where the optional --grace-period= parameter overrides the default grace period (30 seconds):

    kubectl delete pod redis --grace-period=20

For force deletion, set --grace-period to 0 and specify the additional --force flag. This works in kubectl versions >= 1.5.
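Spelled out, the force deletion described above looks like this:

    kubectl delete pod redis --grace-period=0 --force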
Limitations of Direct Pod Creation

Creating individual Pods directly in Kubernetes is quite rare, because Pods are designed to be relatively ephemeral entities. Since Pods do not self-heal, a Pod created manually will not be restarted if its Node fails or if the scheduling operation fails. Similarly, a Pod won't be recreated after an eviction caused by a shortage of compute resources or by Node maintenance. As a result, Pods created directly can easily be lost and have to be recreated from scratch.

Therefore, the best way to create Pods with native Kubernetes tools is to use Controllers. A Controller is a Kubernetes object that can create and manage multiple Pods, enabling replication and rollout and offering self-healing capabilities. For example, if a Node fails, the Controller can schedule the affected Pods onto a different Node, thereby maintaining the desired cluster state.

Creating Pods with a ReplicationController

A ReplicationController maintains the desired number of Pods, removing extra Pods and creating new ones when there are fewer than expected. In contrast to manually created Pods, the Pods created by a ReplicationController are automatically replaced upon failure. To create a new ReplicationController, we first define a template and save it to a new file titled httpd-rc.yaml. In the example below, we create a ReplicationController that brings up three replicas of the Apache HTTP Server.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: httpd
    spec:
      replicas: 3
      selector:
        app: httpd
      template:
        metadata:
          name: httpd
          labels:
            app: httpd
        spec:
          containers:
          - name: httpd
            image: httpd
            ports:
            - containerPort: 80

Then, run kubectl create to start the Pods:

    $ kubectl create -f httpd-rc.yaml
    replicationcontroller "httpd" created

We can now check on the status of the ReplicationController using the following command:

    $ kubectl describe replicationcontrollers/httpd
    Name:         httpd
    Namespace:    default
    Selector:     app=httpd
    Labels:       app=httpd
    Annotations:  <none>
    Replicas:     3 current / 3 desired
    Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:  app=httpd
      Containers:
       httpd:
        Image:        httpd
        Port:         80/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Events:
      Type    Reason            Age  From                    Message
      ----    ------            ---- ----                    -------
      Normal  SuccessfulCreate  16s  replication-controller  Created pod: httpd-wzzp3
      Normal  SuccessfulCreate  16s  replication-controller  Created pod: httpd-vvs1v
      Normal  SuccessfulCreate  16s  replication-controller  Created pod: httpd-q3fsb

As you see, the ReplicationController has created three httpd Pods sharing the httpd label. The kubectl output above also displays the namespace and container image used, the assigned port, the replicas' state, and their IDs. Finally, to delete the ReplicationController, run:

    $ kubectl delete replicationcontroller httpd
    replicationcontroller "httpd" deleted

As you might have noticed, creating Pods with a ReplicationController is quite simple. The ReplicationController, however, has certain limitations, such as the need to create a new replication controller for each app upgrade, to switch between replication controllers manually, and to revert failed changes by hand. Thus, using the Deployment Controller is recommended if you want to create replicas while automating operations like rolling updates and reverting failed changes.
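As a side note before we switch to Deployments: a ReplicationController can also be resized in place without editing its template. For example, to grow the httpd controller above to five replicas:

    kubectl scale rc httpd --replicas=5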
In the following example, we will define a Deployment that creates a ReplicaSet of three Apache HTTP Server Pods (httpd). The first thing we need to do is to create a Deployment object and save it to a new file - e.g httpd-deployment.yaml. apiVersion: extensions/v1beta1 kind: Deployment metadata: name: httpd-deployment labels: app: httpd spec: replicas: 3 selector: matchLabels: app: httpd template: metadata: labels: app: httpd spec: containers: - name: httpd image: httpd:latest ports: - containerPort: 80 The Deployment object defined above does the following: Creates a Deployment named httpd-deployment indicated by the metadata.name field. The Deployment is created with three httpd replicas, specified in the spec.replicas field. The selector field defines how the Deployment Controller finds the right Pods to manage. In our example, we have created a label "httpd" shared by all Pods (spec.selector.matchLabels.app.httpd). The template.spec field specifies that the Pods run only one container named httpd, which uses the latest version of the httpd Docker Hub image. The Deployment opens port:80 for all Pods. Once all edits are made, we can create the Deployment: kubectl create -f httpd-deployment.yaml deployment "httpd-deployment" created We can then see the newly created Deployment and three running httpd replicas using kubectl get deployments. The output will be similar to the following: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE httpd-deployment 3 3 3 3 1m The output above contains the following information: NAME : the list of all Deployments in your Kubernetes cluster DESIRED : the number of replicas K8s wants to see running CURRENT : displays how many replicas K8s does see running UP-TO-DATE : references how many of the currently running replicas fit the desired state of the deployment (container image, labels, or other manifest field changes) AVAILABLE : displays how many currently running replicas have successfully passed their readiness probe AGE : refers specifically to the age of the deployment resource, and not the replicas/pods themselves We can also check the Deployment rollout status running kubectl rollout status deployment/httpd-deployment. If your Deployment has been successfully rolled out, you'll see the following output: deployment "httpd-deployment" successfully rolled out To see the ReplicaSet (rs) created by the deployment, run kubectl get rs: NAME DESIRED CURRENT READY AGE httpd-deployment-2955525241 3 3 3 13m ReplicaSet is formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE] where 2955525241 is the Pod's template hash value automatically generated upon the Deployment creation. We can also run kubectl get pods --show-labels to see the labels automatically generated for each Pod NAME READY STATUS RESTARTS AGE LABELS httpd-deployment-2955525241-27cj2 1/1 Running 0 15m app=httpd,pod-template-hash=2955525241 httpd-deployment-2955525241-kfjj2 1/1 Running 0 15m app=httpd,pod-template-hash=2955525241 httpd-deployment-2955525241-z6s5z 1/1 Running 0 15m app=httpd,pod-template-hash=2955525241 Deleting a Deployment If you want to delete the Deployment, simply run: $ kubectl delete deployment httpd-deployment deployment "httpd-deployment" deleted That's it! You can now create Pods with the Deployment Controller and use kubectl to check its status and replicas. It's as simple as that! Deploying Pods with Supergiant Supergiant is a Kubernetes-as-a-Service platform that simplifies deployment and management of Kubernetes clusters and resources. 
It provides an easy-to-use UI for application deployment and grants access to Helm repositories containing hundreds of popular applications. In a nutshell, a Helm repository is a collection of packages that contain common configuration for Kubernetes applications including an app's runtime, ports, dependencies, communication, and networking settings. Supergiant ships with the /stable branch of the official Kubernetes Charts repository that includes approximately 160 configured apps. These charts are well tested packages that comply with all technical requirements of Kubernetes. The process of deploying apps in Supergiant is quite simple. To deploy a new app from a given repository, click "Apps" in the main navigation menu and then "Deploy New App". Then, in the application list, select the app you wish to deploy and edit its configuration (see the deployment process in the GIF below). Each chart has its specific parameters and options that can be found in the official documentation for the chart. At the minimum, you should specify a cluster to which to deploy the app and create the user-friendly name for the deployment. After all edits are made, click the "Submit" button and watch your application being deployed at a fraction of the time. On successful deployment, the app's status will change to "Running" as displayed in the cluster stats. Adding Custom Repositories Supergiant allows adding custom repositories (both private and public), which means that you can have access to any Pods or applications you like. To add a new repository, select App & Helm Repositories under the Settings drop-down menu in the upper header. Put the name and the URL of a new repository in blank fields and click "Add new Repo". Supergiant will add a new repository to its memory and refresh the list of available apps in your apps' list.  For example, in the GIF above, we've added an official Kubernetes Charts incubator repository that includes apps that have not yet passed all requirements for /stable repository. Conclusion As we have seen, Pods are powerful Kubernetes abstractions that enable co-scheduling, replication, communication, updating, and other operations with tightly coupled containers. As a Kubernetes user, you have a wide array of options to deploy Pods to your cluster including direct Pod creation, using Deployment Controller and ReplicationController. Supergiant provides access to these native Kubernetes API components while also enabling easy deployments of Helm charts via repositories accessible in the easy-to-use Supergiant dashboard. Supergiant abstracts the deployment of Pods even further, making the task easier for developers not familiar with complex Kubernetes concepts. In the next tutorials, we'll dive deeper into other Kubernetes concepts and components, so stay tuned for upcoming content. [Less]
Posted over 7 years ago by Kirill
Kubernetes (K8s) is an open source platform that automates the deployment, scaling, and management of containerized applications, workloads, and services. The platform offers a set of abstraction layers for container management that is agnostic about the underlying container technology (e.g., Docker, Rkt) used for packaging. Among other things, Kubernetes facilitates the efficient management of computing resources, deployment, scheduling, horizontal and vertical scaling, updating, and security through various abstractions.

In this article, we discuss Kubernetes Pods as one of the central concepts in Kubernetes. We first focus on the Pod's architecture and functions, its basic use cases and benefits, and then proceed to its deployment options. We discuss four broad options -- direct Pod creation, the Deployment Controller, the Replication Controller, and Supergiant (our Kubernetes-as-a-Service platform that simplifies the deployment and management of K8s clusters and resources via a centralized UI that abstracts away the Kubernetes API). By the end of this tutorial, you will have a better understanding of the available options for deploying Pods in Kubernetes and beyond.

What is a Pod?

A Pod is the unit of deployment in Kubernetes that provides a set of abstractions and services for applications running on a Kubernetes cluster. It can include one or multiple containers (e.g., Docker, Rkt) that share storage and network resources as well as Linux cgroups and namespaces, are co-scheduled and co-located, and share the same life cycle.

Why would we use Pods at all instead of conventional containers, which can also provide a good level of isolation? The answer is that the Pod model offers higher-level abstractions that make it easy to plug Kubernetes services into containers and applications. Pods augment the container model by automatically handling co-scheduling, coordinated replication, resource sharing, dependency management, and the shared fate of the applications running in a Pod. Thus, Pods can be thought of as "logical hosts" that contain relatively tightly coupled application containers and consume Kubernetes orchestration services to manage them.

Pod's Shared Resources and Communication

Containers in a Pod run within a shared context that includes a set of Linux cgroups and namespaces along with Kubernetes-native facets of isolation. Similarly to standalone containers, Pods may have shared volumes that can be accessed by the applications within the Pod. These volumes are defined in the Pod spec and mounted into each container's filesystem.

In addition, Pods are created with internal communication mechanisms. Each Pod gets a unique IP address and a port space shared by all containers running in it. Containers can communicate via localhost, but because they share a network namespace (IP and ports), they must coordinate port usage to avoid conflicts. Pods, in turn, can communicate with each other using their IP addresses over a flat shared network (e.g., one provided by Flannel), with each Pod's hostname set to the Pod's name.

Uses of Pods

The most basic use of Pods is running a single container, where the Pod serves as a wrapper around that container (e.g., a Docker container) and Kubernetes manages the Pod rather than the container directly. A more advanced use of Pods is running multiple tightly coupled containers.
In this scenario, a Pod is a wrapper around multiple co-located containers that share resources and have distinct responsibilities. For example, one could imagine a Pod encapsulating two containers, one of which acts as a static file server while the second serves as a 'sidecar' container executing operations on those files (e.g., updates and transformations); a minimal manifest for this pattern is sketched at the end of this section. These two approaches enable a number of use cases for Pods, including the following:

hosting vertically integrated application stacks (e.g., MEAN and LAMP) that include a number of tightly coupled applications
content management systems (CMS), file loaders, and local cache managers
log shippers, backup, compression, and snapshotting tools
monitoring adapters, event publishers, data change watchers, etc.
network tools like proxies, bridges, and adapters

Pods Life Cycle

Pods are created and deployed with a unique ID (UID) and scheduled to Nodes, where they live until their termination or deletion. (Note: a Pod does not survive the failure or deletion of the Node it runs on.) It is noteworthy that a Pod is not re-scheduled to a new Node after termination. Rather, Kubernetes creates an identical Pod with the same name if needed, but with a new UID. When a Pod dies, its shared volumes are detached as well.

A Pod's life cycle consists of a number of phases that are defined in the PodStatus object. Possible values for the phase include the following:

Pending: The Pod has been accepted by the system, but one or more container images have not yet been downloaded or installed.
Running: The Pod has been scheduled to a specific Node, and all of its containers are running.
Succeeded: All containers in the Pod terminated successfully and will not be restarted.
Failed: At least one container in the Pod terminated with a failure, i.e., it either exited with a non-zero status or was terminated by the system.
Unknown: The state of the Pod cannot be obtained for some reason, typically due to a communication error.

Benefits of Pods

In addition to better isolation and access to Kubernetes orchestration services, Pods offer a number of other important advantages compared to running multiple programs in a single (Docker) container:

Transparency: Thanks to Pods, containers are visible to the infrastructure and OS, enabling services such as process management and resource monitoring.
Decoupling software dependencies: Running a single container per Pod allows independent versioning, deployment, and upgrading of the containers that make up an application.
Simplicity: Users don't need their own process managers to handle signal and exit-code propagation.
Efficiency: Because infrastructure services are delegated to the system, containers in Pods can be more lightweight.
Pluggability: Running containers in Pods makes it possible to plug in custom Kubernetes schedulers and controllers.
High-Availability Applications: Pods can be replaced in advance of their termination and deletion, ensuring high availability of your applications.
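To make the multi-container pattern described earlier concrete, here is a minimal sketch of a two-container Pod. The names (file-server-pod, web, sync-sidecar) and the exact images are hypothetical choices for illustration; the shared emptyDir volume shows how co-located containers exchange files.

apiVersion: v1
kind: Pod
metadata:
  name: file-server-pod        # hypothetical name, for illustration only
spec:
  volumes:
  - name: shared-files         # scratch volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                  # serves the shared files over HTTP
    image: httpd
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-files
      mountPath: /usr/local/apache2/htdocs
  - name: sync-sidecar         # 'sidecar' that refreshes the files periodically
    image: busybox
    command: ['sh', '-c', 'while true; do date > /data/index.html; sleep 60; done']
    volumeMounts:
    - name: shared-files
      mountPath: /data

Both containers see the same files because they mount the same volume, and they could also talk over localhost since they share the Pod's network namespace.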
Deploying Pods

Kubernetes provides several options for creating and managing Pods:

direct creation of Pods via Pod templates
using the Deployment Controller
using the Replication Controller
using a Kubernetes-as-a-Service provider like Supergiant

Prerequisites

To try these options, you'll need to put several prerequisites in place:

A running Kubernetes cluster: If you don't have a Kubernetes cluster yet, you can run a local single-node Kubernetes cluster with Minikube or link your cloud account to Supergiant and deploy a cluster on it.
Kubectl command line tool: You can find instructions for installing kubectl here.

Direct Pod Creation Using Pod Templates

In most cases, you don't need to create Pods directly. (Note: Deployments are the recommended way to create Pods in Kubernetes.) However, manual creation of Pods can be useful for development and testing purposes. To deploy a Pod directly, you first define a Pod template, which is a Pod specification describing the Pod's runtime, the container images used, and other application-specific settings (e.g., ports, proxies). Users can write Pod templates in YAML or JSON syntax; YAML is used in the example below.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

As you see, this Pod template does not specify a desired state for the Pod, such as a number of replicas to bring up. Therefore, if the template changes later, the Pods already running are not affected. This approach radically simplifies the platform's semantics and increases the flexibility of deployments.

Creating a Pod from Scratch

In this example, we walk through the whole process of creating a Pod from a Pod template.

Step 1. Define a New Pod Using a Pod Template

Create a Pod template for the popular Redis data structure store, retrieved from the Docker Hub container repository. Save this template in the redis-pod.yaml file for later use.

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - image: redis:latest
    name: redis
    ports:
    - containerPort: 6379
  restartPolicy: Never

This Pod template specifies the following important settings:

apiVersion: the Kubernetes API version used.
spec.containers.image: the Redis container image to be downloaded from Docker Hub (here, the latest release).
spec.containers.ports.containerPort: the port exposed by the Redis container.
restartPolicy: a restart policy for the Pod. Available options are Always, OnFailure, and Never. In this example, we are asking Kubernetes to never restart the Pod if it fails.

Step 2. Create the Pod

Once our Pod template is edited and saved, we can use the kubectl CLI to create the Pod. (See the instructions on how to install kubectl on your Kubernetes master.)

$ kubectl create -f redis-pod.yaml   # use your path to redis-pod.yaml
pod "redis" created

Step 3. Check the Pod

We can now see the updated list of running Pods using the following command:

kubectl get pods

The console output shows the name of the Pod, its status, the number of restarts, and the Pod's age.

NAME      READY     STATUS    RESTARTS   AGE
redis     1/1       Running   0          14s

Deleting the Pod

Pods can be deleted using the following command, where the optional parameter --grace-period= allows users to override the default grace period (30 seconds):

kubectl delete pod redis --grace-period=20

For forced deletion, set --grace-period to 0 and specify the additional flag --force. This works in kubectl versions >=1.5.
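For example, this is how the forced deletion described above looks in practice (use it with care -- the Pod's containers may keep running on the Node briefly after the API object is gone):

$ kubectl delete pod redis --grace-period=0 --force

You can also check a Pod's current phase (see the life-cycle section above) at any point; the jsonpath output format used below is a standard kubectl feature:

$ kubectl get pod redis -o jsonpath='{.status.phase}'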
Limitations of Direct Pod Creation

Creating individual Pods directly in Kubernetes is quite rare because Pods are designed to be relatively ephemeral entities. Since Pods do not self-heal, a Pod created manually will not be restarted if its Node fails or if the scheduling operation fails. Similarly, a Pod won't be recreated after an eviction due to a shortage of compute resources or Node maintenance. As a result, Pods created directly can easily be lost and have to be created from scratch again.

Therefore, the best way to create Pods with native Kubernetes tools is to use Controllers. A Controller is a Kubernetes object that can create and manage multiple Pods, enabling replication and rollouts and offering self-healing capabilities. For example, if a Node fails, the Controller can schedule the affected Pods on a different Node, thereby maintaining the desired cluster state.

Creating Pods with a Replication Controller

A ReplicationController maintains the desired number of Pods, removing extra Pods and creating new ones when there are fewer than expected. In contrast to manually created Pods, the Pods created by a ReplicationController are automatically replaced upon failure. To create a new ReplicationController, we first define a template and save it to a new file titled httpd-rc.yaml. In the example below, we are creating a ReplicationController that will bring up three replicas of the Apache HTTP Server.

apiVersion: v1
kind: ReplicationController
metadata:
  name: httpd
spec:
  replicas: 3
  selector:
    app: httpd
  template:
    metadata:
      name: httpd
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80

Then, run kubectl create to start the Pods:

$ kubectl create -f httpd-rc.yaml
replicationcontroller "httpd" created

We can now check the status of the ReplicationController using the following command:

$ kubectl describe replicationcontrollers/httpd
Name:         httpd
Namespace:    default
Selector:     app=httpd
Labels:       app=httpd
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=httpd
  Containers:
   httpd:
    Image:        httpd
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  16s   replication-controller  Created pod: httpd-wzzp3
  Normal  SuccessfulCreate  16s   replication-controller  Created pod: httpd-vvs1v
  Normal  SuccessfulCreate  16s   replication-controller  Created pod: httpd-q3fsb

As you see, the ReplicationController has created three httpd Pods that share the httpd label. The kubectl output above also displays the namespace, the container image used, the assigned ports, the replicas' state, and the Pods' IDs.

Finally, to delete the ReplicationController, run:

$ kubectl delete replicationcontroller httpd
replicationcontroller "httpd" deleted

As you might have noticed, creating Pods with a ReplicationController is quite simple. A ReplicationController, however, has certain limitations: you have to create a new replication controller for each app upgrade, switch between replication controllers manually, and revert failed changes manually. Thus, using the Deployment Controller is recommended if you want to create replicas while automating other operations such as rolling updates and reverting failed changes.
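As a side note, a running ReplicationController can also be resized imperatively. A minimal example, assuming the httpd controller created above is still running (rc is the built-in short name for replicationcontroller):

$ kubectl scale rc httpd --replicas=5

The controller immediately creates two more Pods to reach the new desired count; scaling down removes the extra Pods in the same way.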
Creating Pods Using the Deployment Controller

A Deployment Controller can be used to create new ReplicaSets and to make declarative updates to existing Deployments. In the following example, we will define a Deployment that creates a ReplicaSet of three Apache HTTP Server Pods (httpd). The first thing we need to do is create a Deployment object and save it to a new file, e.g., httpd-deployment.yaml.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80

The Deployment object defined above does the following:

Creates a Deployment named httpd-deployment, indicated by the metadata.name field.
Creates three httpd replicas, specified in the spec.replicas field.
The selector field defines how the Deployment Controller finds the Pods to manage. In our example, the Deployment manages all Pods carrying the app: httpd label (spec.selector.matchLabels).
The template.spec field specifies that the Pods run a single container named httpd, which uses the latest version of the httpd Docker Hub image.
The Deployment opens port 80 for all Pods.

Once all edits are made, we can create the Deployment:

$ kubectl create -f httpd-deployment.yaml
deployment "httpd-deployment" created

We can then see the newly created Deployment and the three running httpd replicas using kubectl get deployments. The output will be similar to the following:

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-deployment   3         3         3            3           1m

The output above contains the following information:

NAME: the names of the Deployments in your Kubernetes cluster
DESIRED: the number of replicas K8s wants to see running
CURRENT: how many replicas K8s actually sees running
UP-TO-DATE: how many of the currently running replicas match the desired state of the Deployment (container image, labels, or other manifest fields)
AVAILABLE: how many currently running replicas have passed their readiness probe
AGE: the age of the Deployment resource itself, not of the replicas/Pods

We can also check the Deployment's rollout status by running kubectl rollout status deployment/httpd-deployment. If your Deployment has been successfully rolled out, you'll see the following output:

deployment "httpd-deployment" successfully rolled out

To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs:

NAME                          DESIRED   CURRENT   READY   AGE
httpd-deployment-2955525241   3         3         3       13m

The ReplicaSet's name is formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE], where 2955525241 is the Pod template's hash value generated automatically when the Deployment is created. We can also run kubectl get pods --show-labels to see the labels automatically generated for each Pod:

NAME                                READY     STATUS    RESTARTS   AGE       LABELS
httpd-deployment-2955525241-27cj2   1/1       Running   0          15m       app=httpd,pod-template-hash=2955525241
httpd-deployment-2955525241-kfjj2   1/1       Running   0          15m       app=httpd,pod-template-hash=2955525241
httpd-deployment-2955525241-z6s5z   1/1       Running   0          15m       app=httpd,pod-template-hash=2955525241

Deleting a Deployment

If you want to delete the Deployment, simply run:

$ kubectl delete deployment httpd-deployment
deployment "httpd-deployment" deleted

That's it! You can now create Pods with the Deployment Controller and use kubectl to check the Deployment's status and replicas. It's as simple as that!
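The main payoff of a Deployment over a ReplicationController is declarative updates. As a hedged sketch (the httpd:2.4 tag is just an illustrative example), changing the container image triggers a rolling update, and a failed change can be rolled back:

$ kubectl set image deployment/httpd-deployment httpd=httpd:2.4   # roll out a new image
$ kubectl rollout status deployment/httpd-deployment              # watch the rollout
$ kubectl rollout undo deployment/httpd-deployment                # revert to the previous revision

Behind the scenes, the Deployment creates a new ReplicaSet with a new pod-template-hash and gradually shifts replicas onto it, automating exactly the steps that a ReplicationController leaves to the user.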
Deploying Pods with Supergiant

Supergiant is a Kubernetes-as-a-Service platform that simplifies the deployment and management of Kubernetes clusters and resources. It provides an easy-to-use UI for application deployment and grants access to Helm repositories containing hundreds of popular applications. In a nutshell, a Helm repository is a collection of packages that contain common configuration for Kubernetes applications, including an app's runtime, ports, dependencies, communication, and networking settings. Supergiant ships with the /stable branch of the official Kubernetes Charts repository, which includes approximately 160 pre-configured apps. These charts are well-tested packages that comply with all technical requirements of Kubernetes.

The process of deploying apps in Supergiant is quite simple. To deploy a new app from a given repository, click "Apps" in the main navigation menu and then "Deploy New App". Then, in the application list, select the app you wish to deploy and edit its configuration (see the deployment process in the GIF below). Each chart has its own parameters and options, which can be found in the chart's official documentation. At a minimum, you should specify a cluster to deploy the app to and create a user-friendly name for the deployment. After all edits are made, click the "Submit" button and watch your application being deployed in a fraction of the time. On successful deployment, the app's status will change to "Running", as displayed in the cluster stats.

Adding Custom Repositories

Supergiant allows adding custom repositories (both private and public), which means that you can have access to any Pods or applications you like. To add a new repository, select App & Helm Repositories under the Settings drop-down menu in the upper header. Enter the name and the URL of the new repository in the blank fields and click "Add new Repo". Supergiant will register the new repository and refresh the list of available apps in your apps list. For example, in the GIF above, we've added the official Kubernetes Charts incubator repository, which includes apps that have not yet passed all the requirements for the /stable repository.

Conclusion

As we have seen, Pods are powerful Kubernetes abstractions that enable co-scheduling, replication, communication, updating, and other operations on tightly coupled containers. As a Kubernetes user, you have a wide array of options for deploying Pods to your cluster, including direct Pod creation, the Deployment Controller, and the ReplicationController. Supergiant provides access to these native Kubernetes API components while also enabling easy deployment of Helm charts via repositories accessible in the easy-to-use Supergiant dashboard. Supergiant abstracts the deployment of Pods even further, making the task easier for developers not familiar with complex Kubernetes concepts. In the next tutorials, we'll dive deeper into other Kubernetes concepts and components, so stay tuned for upcoming content.
Posted over 7 years ago by Kirill
Day 3 of KubeCon in Copenhagen is well underway, and Kubernetes project evolution is at the top of the agenda. The mood of the third conference day was set by Aparna Sinha, Group Product Manager at Google, who spoke about Kubernetes project updates. According to Aparna, Kubernetes is rapidly becoming one of the most popular and in-demand open source software projects. It is already second only to Linux in popularity on GitHub, and 54% of enterprises are using Kubernetes in at least some way. As Kubernetes matures, the community is shifting its attention from cool features to cluster and cloud security, and the tangible presence of security startups at this conference confirms the trend.

For many users, Kubernetes 1.7 was a milestone release that added security, support for stateful applications, and extensibility features. Kubernetes hardened its security with new encrypted Secrets, which allow encrypting data in the etcd key-value store used internally by Kubernetes. Progress has also been made in threat detection, with better integration of various cloud-native threat detection tools like Aqua, Capsules, Sysdig, and Twistlock -- all hosted by the CNCF. Kubernetes developers have also addressed enterprises' concerns about the potential security risks associated with containers' broad access to system resources, which might compromise host security. On May 2, 2018, just in time for the conference, Google open sourced its gVisor tool for the development of sandboxed containers. This new container architecture offers better isolation by intercepting application system calls and acting as a guest kernel running in user space. Open sourcing gVisor satisfies the enterprise appetite for running heterogeneous and less trusted workloads in Kubernetes clusters. In addition to these security updates, new Kubernetes versions introduce automatic updates of stateful applications like databases and bring a number of performance improvements.

It is clear that Kubernetes project updates are leading the community toward more secure, fast, and efficient usage of Kubernetes clusters. These changes can attract more companies working with sensitive workloads to adopt Kubernetes infrastructure. One such adoption case study was presented by Sarah Wells, Technical Director for Operations and Reliability at the Financial Times. The British media giant recently completed the migration of over 150 microservices to Kubernetes, dramatically improving deployment speed and reducing infrastructure costs.

Discussions on these and other topics are underway in the panels and sessions of KubeCon, after which all attendees are heading to the All Attendee Party in one of the most beautiful places in Copenhagen, the Tivoli Gardens. The Supergiant team is joining the party to briefly enjoy the event but will soon be back at the conference. If you are close to the venue, come visit our official KubeCon booth (SU-C17) and stay tuned for updates from the last day of the conference tomorrow.
Posted over 7 years ago by Kirill
Over 90 IT companies, about 4,000 attendees, top-notch organization and staff -- all under one very large roof: that’s KubeCon Europe 2018. This year the annual KubeCon Europe is taking place in Copenhagen, Denmark, at the high-tech Bella Center. As always, the venue gathers companies championing cloud-native applications, containerization, and Kubernetes.

Dan Kohn, Executive Director of the Cloud Native Computing Foundation (CNCF), which organizes the event, noted that Kubernetes and the cloud-native approach are gaining traction in the tech industry, becoming a de facto standard for running applications at scale. The 4,000 attendees (compared to just several hundred at the first KubeCon in 2016 in San Francisco), Kubernetes integrations by major cloud providers, and the growth in CNCF project participants (e.g., DigitalOcean became a Gold member in 2017) all reflect the wider adoption of Kubernetes and cloud-native applications by the community.

The cloud-native ecosystem, however, is still evolving, as the conference sessions show. Today’s hot topics include Kubernetes security, cloud-native storage, new programming languages (e.g., Ballerina), service meshes, container runtimes, effective CI/CD with Kubernetes, and serverless architectures. Machine learning at scale was another trendy theme, centered around Kubeflow and other projects for cloud-native ML. Kubernetes and cloud-native platforms are gradually incorporating all sorts of technology stacks that make up today’s world of computing, aligning them with microservices architecture and containerization and addressing emerging security and networking challenges.

As cloud-native solutions become even more ubiquitous, we are likely to witness Kubernetes turn into a core, low-level feature of any software development process. According to Alexis Richardson, CNCF TOC Chair and WeaveWorks CEO, we are just several steps away from asking Kubernetes to run our code directly, even without containers. Sounds radical? Perhaps, but KubeCon continues to be the place where new ideas and projects are born and take shape. As one of the sponsors of the conference, Supergiant plays an important part in this process. We invite you to visit Supergiant’s official KubeCon booth (booth #SU-C17) if you are close to the venue. In any case, stay tuned to the upcoming reports about KubeCon events in Copenhagen to learn more.