|
Posted over 8 years ago by adam
In late March, Mike Johnston of Supergiant presented "Zombie Kubernetes! Making Nodes Rise from the Dead" at KubeCon EU 2017 in Berlin. He focused on how to install Kubernetes using immutable configuration and also covered removing points of provisioning failure by leveraging cloud-config for configuration.
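To give a flavor of the approach, a cloud-config file can bake a node's entire configuration into the boot process, so a replacement node comes up identically configured every time. The sketch below is a hypothetical illustration (the file paths, unit name, and kubelet flags are placeholders, not taken from the talk):

```yaml
#cloud-config
# Hypothetical sketch: all node configuration is applied at boot,
# so replacing a dead node yields a byte-identical replacement
# (immutable configuration, no manual provisioning steps).
write_files:
  - path: /etc/kubernetes/kubelet.env
    permissions: "0644"
    content: |
      KUBELET_ARGS=--cloud-provider=aws
coreos:
  units:
    - name: kubelet.service
      command: start
```

Because the node is never mutated after boot, there is no provisioning drift to debug: a node that dies is simply re-created from the same cloud-config.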
[slideshare id="74462168"]
About Supergiant
Supergiant runs on top of Kubernetes and makes things easier for engineering and support teams to configure. It doesn't obscure Kubernetes in any way: it simply brings deep Kubernetes features to the surface and makes them easy to manage.
Supergiant’s packing algorithm also augments Kubernetes' built-in resource allocation by automatically handling instance provisioning and allocation for you. For example, if you are running large instances for your cluster, it does not make sense to start another large server for just one customer. The packing algorithm will automatically choose the correct servers that meet the specific needs of your environment.
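The core idea behind this kind of packing can be sketched as a first-fit-decreasing bin-packing heuristic: place each workload on the first server with enough spare capacity, and provision a new server only when nothing fits. This is a simplified illustration of the general technique, not Supergiant's actual algorithm:

```python
# Illustrative first-fit-decreasing packing heuristic.
# NOT Supergiant's real implementation -- just a sketch of the idea
# that per-workload provisioning wastes capacity, while packing
# workloads onto shared servers does not.

def pack(workloads, server_capacity):
    """Return a list of servers, each a list of workload sizes."""
    servers = []
    for size in sorted(workloads, reverse=True):  # largest first
        for server in servers:
            if sum(server) + size <= server_capacity:
                server.append(size)  # fits on an existing server
                break
        else:
            servers.append([size])  # no fit: provision a new server
    return servers

# Five workloads fit on two servers of capacity 8,
# instead of one server per workload.
print(pack([2, 3, 1, 4, 2], 8))  # → [[4, 3, 1], [2, 2]]
```

In this toy run, naive one-workload-per-server provisioning would start five servers; the packing heuristic gets by with two.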
We hope you have enjoyed this article. Check us out at supergiant.io, or chat with us about Supergiant in the comments or in our shiny Slack channel. Contribute your thoughts, code, or issues on the Supergiant GitHub.
Further Reading
Kubernetes Series: Understanding Why Container Architecture Is Important to the Future of your Business
Top Reasons Businesses Should Move to Kubernetes Now
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
How Qbox Saved 5 Figures A Month With Supergiant
Supergiant Tutorials
|
|
Posted over 8 years ago by adam
The team behind Qbox.io and Supergiant.io will join other companies at the ATX Startup Crawl at SXSW to introduce their hosted Elasticsearch product and their open-source container orchestration system on March 13 from 5 PM to 10 PM.
Join us as the Startup Crawl showcases the best in talent, startups, and more that Austin's technology community has to offer. Last year's crawl featured more than 90 companies and 12,000+ registered crawlers. This year is going to be just as awesome.
Supergiant is the first production-grade container orchestration system that makes it easy to manage auto-scaling, clustered, stateful datastores.
CLICK HERE TO SIGN UP FOR THE CRAWL
Supergiant began in 2015 when the team at Qbox.io needed a production-ready, scalable solution for their Hosted Elasticsearch Service. After massive initial internal success, it was refined to easily launch and manage any containerized application. Supergiant solves many huge problems for developers who want to use scalable, distributed databases in a containerized environment.
Come chat with us and pick up some swag.
Built on top of Kubernetes, Supergiant exposes many underlying key features and makes them easy to use as top-layer abstractions. A Supergiant installation can be up and running in minutes, and users can take advantage of automated server management/capacity control, auto-scaling, shareable load balancers, volume management, extensible deployments, and resource monitoring through an effortless user interface.
Supergiant uses a packing algorithm that typically results in 25% lower compute-hour usage. It also enhances predictability, enabling customers with fluctuating resource needs to more fully leverage forward pricing such as AWS Reserved Instances.
Supergiant is available on GitHub. It is free to download and usable under the Apache 2.0 license. It currently works with Amazon Web Services, DigitalOcean, OpenStack, and GCE. Supergiant is a member of and contributor to The Linux Foundation and the Cloud Native Computing Foundation.
|
|
Posted almost 9 years ago by Vineeth
In this blog post, we will compare traditional web application architecture with the emerging microservice architecture that is all the rage these days. We will explain why microservice architecture is widely preferred to traditional architecture for web applications and why it's such a great idea to build microservices with Supergiant.
Old and Monolithic: Traditional Architecture
Traditional web application architecture consists of three major components:
User interface: The layer that communicates directly with the user.
Database: Stores stateful application data.
Server-side application: The living monolith of our architecture, where calls from the UI are processed, data is retrieved and manipulated, processed data is sent back to the user, and new data is created and sent to our database.
In this kind of architecture, the server-side application acts as a single large structure. If we need to introduce a significant change or a new feature, we would include changes in the server-side application.
This architecture served as a good practice before the age of the cloud, but since the advent of the cloud, scaling these types of applications has become problematic.
The scaling issue arises when there are many instances of the application running and we need to make a change. Whether such a change is small or big, incorporating it into the entire architecture means it must be rolled out to every instance of the application, which lengthens change cycle times.
New and Spiffy: Microservice Architecture
What are microservices?
Advantages of microservices architecture over traditional architecture.
Scaling using microservices.
The popularity of this architecture has skyrocketed among developers for a number of reasons, such as the migration of platforms to the cloud and the growing number of devices beyond traditional computers (read: IoT) interacting with a single application.
There are no strict definitions or guidelines for how a microservice application should be designed or architected, but it's generally agreed across the industry that good microservice architecture builds an application as several small components that interact with one another independently, so that one component can fail without bringing down the rest.
When each component service is independently deployable and scalable, this architecture gains a significant advantage over the traditional type because a component that requires change is independent and does not affect other services. Therefore, not only is the deployment of changes simpler, but the scaling of a particular service is also greatly simplified and easier to control. Traditionally, scaling had to be done with all of the components together -- or at least it required additional effort to scale only the needed components.
Here’s an illustration that depicts the difference between the scaling methodologies for a traditional monolithic service and a typical microservice for the same application.
Using this illustration, we note that the microservices architecture scales only those services that need scaling, whereas with traditional monolithic architecture, the entire application has to be scaled.
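In Kubernetes terms, per-service scaling comes down to giving each service its own Deployment and adjusting only that Deployment's replica count. The manifest below is a minimal hypothetical sketch (the service name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-service          # hypothetical service name
spec:
  replicas: 5                 # scale only this service, not the whole app
  selector:
    matchLabels:
      app: cart-service
  template:
    metadata:
      labels:
        app: cart-service
    spec:
      containers:
        - name: cart
          image: example/cart:1.0   # placeholder image
```

Bumping `replicas` here adds capacity for the cart service alone; the UI, database, and other services keep their own independent replica counts.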
Another advantage of microservices architecture is that its methodology generally favors development around business capabilities instead of the older concept of specific component-based development. For example, if a feature like "add to cart" needed a change, the old model required every team, from the UI team to the DB team, to get in line and stay alert in case something went wrong, because a mistake in one place could break other parts of the application.
In the microservices model, updates are much simpler, and the chance of unintended breakage is much lower because everything runs independently.
Microservice-based architectures are heavily decentralized because they focus on the reusability of individual components. Many applications use pre-written microservice libraries for quick deployment and modify them to suit their use case, making development much faster.
Even Spiffier: Microservice Architecture on Supergiant
Introduction to containers concept
Kubernetes: the container management system
What is Supergiant Architecture?
Difference between Supergiant and Kubernetes
With the advent of the cloud, developers have naturally drifted toward microservices when designing applications, hence the growing acceptance of container-based app deployment systems. With the adoption of microservices architecture, isolating services from one another has become a priority, and this can be achieved easily with containerization.
Containers are self-contained application execution environments with their own memory, CPU, etc. They enable more efficient, more fluid utilization of system resources than virtual machines do because VMs carry the additional weight of a full guest OS.
As containers have become popular, many container management systems have emerged to fill conceptual and operational gaps. Since containers live inside a single kernel, what if we need to run many containers across many kernels? When one container is ready to scale, how will the host hardware be allocated and optimized? Such scalability questions were answered very efficiently by the arrival of Kubernetes, the cloud-scale orchestration platform for containers.
Kubernetes manages clusters of hardware for use by multiple containers. The result is better container performance without unwanted resource waste.
Now comes Supergiant, which is built on top of Kubernetes. Supergiant transforms Kubernetes into a more efficient cluster deployment platform by equipping admins with easy configuration options, automatic load balancer management, and tighter resource allocation controls for maximum efficiency.
To keep our illustration simple, let’s look at how a traditional web application would look on Supergiant. Here we have two main components: the web-app component and the database component.
Why Supergiant Is Perfect for Microservice Architecture
Supergiant can sense the resource utilization of each component and separately scale resources for individual components or hardware topology for the whole application cloud.
Supergiant allows deployments, changes in individual services, etc. to be managed by simple command line tools.
As a microservice architecture grows, the optimizations performed by Supergiant dramatically benefit the performance of the entire system. This architecture also results in better microservice abstraction. Learn more about Kubernetes, which Supergiant is built on top of.
Imagine an application composed of many services. We create a good microservice architecture by deploying each service as an independent container. When it comes time to scale hardware for our application cloud, Supergiant handles it for us, and our microservices remain resilient.
The Advantages Become Real when Things Change
Until this point, everything is fine, and nothing seems complicated. But what if we had to make a significant change to one of the services of our application? In a normal container cluster, this has to be done by individually tracking down and replacing the particular service, and the same steps have to be replicated across all the installed nodes, which is tedious and time-consuming.
Supergiant saves devops time because it can propagate such a change to every node in our cluster with a single command. This time savings applies not only to individual service changes and replacements but also to security, logging, any other per-service operation, or even the application as a whole.
Thanks for your time. Check us out at supergiant.io, Twitter, Reddit, Slack, and GitHub!
Further Reading
Tutorials: How To Install Supergiant
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
How to Calculate Kubernetes Cost Savings
Kubernetes Series: Understanding Why Container Architecture is Important to the Future of Your Business
Top Reasons Businesses Should Move to Kubernetes Now
|
|
Posted
almost 9 years
ago
by
Vineeth
In this blog post, we will compare traditional web application architecture with the emerging microservice architecture that is all the rage these days.
We will explain why microservice architecture is much preferred over
... [More]
traditional architecture for web applications and why it’s such a great idea to build microservices with Supergiant.
Old and Monolithic: Traditional Architecture
Traditional web application architecture consists of three major components:
User interface: The first and foremost thing that communicates with the user.
Database: Stored stateful application data.
Server-side application: The living monolith of our architecture; where calls from the UI are processed, data is retrieved and manipulated, processed data is sent back to the user, and new data is created and sent to our database.
In this kind of architecture, the server-side application acts as a single large structure. If we need to introduce a significant change or a new feature, we would include changes in the server-side application.
This architecture served as a good practice before the age of the cloud, but since the advent of the cloud, scaling these types of applications has become problematic.
The scaling issue arises when there are many instances of the application running and we need to make a change. Whether such a change is small or big, incorporating it into the entire architecture means the individual changes need to take place in all instances of the application, so they require bigger change cycle times.
New and Spiffy: Microservice Architecture
What are microservices?
Advantages of microservices architecture over traditional architecture.
Scaling using microservices.
The popularity of this architecture has skyrocketed among developers for a number of reasons, such as platforms moving to cloud, along with the increasing number of devices other than normal computing devices (read: IoT) coming into picture for a single application.
There are no strict definitions or guidelines to tell us how a microservice application should be designed or architectured, but it’s agreed across the industry that good microservice architecture involves applications built as several small components, which could interact with each other independently and without failing.
When each component service is independently deployable and scalable, this architecture gains significant advantage over the traditional type because an entity that requires change is independent and does not affect other services. Therefore, not only is the the deployment of changes simpler, but also the scaling of a particular service is greatly simplified and easier to control. Traditionally, scaling had to be done with all of the components inclusively -- or at least it required additional effort to scale only the needed components.
Here’s an illustration that depicts the difference between the scaling methodologies for a traditional monolithic service and a typical microservice for the same application.
Using this illustration, we note that the microservices architecture scales only those service that need scaling, whereas with traditional monolithic architecture, the entire application has to be scaled.
Another advantage of microservices architecture is that its methodology generally favors business model development instead of the older concept of specific component-based development. For example, if a feature like "add to cart" had a change to be made, the initial model required the entire team to get in line from the UI team, to the DB team, etc., and be focused as things gone wrong, which could break other things, etc.
In the microservices model, updates are much simpler, and the chance of unintended breakage is much lower because everything runs independently.
Microservice-based architectures are heavily decentralized because they focus on the successful reusability of individual components. Many applications use pre-written microservice libraries for quick deployment and might also modify those according to their use case, making development much faster.
Even Spiffier: Microservice Architecture on Supergiant
Introduction to containers concept
Kubernetes: the container management system
What is Supergiant Architecture?
Difference between Supergiant and Kubernetes
With the advent of cloud, developers have naturally drifted toward microservices when designing applications, thus the growing acceptance of container-based app deployment systems. With the adoption of microservices architecture, the need to separate or rather isolate the services from one another has become a priority, and this can be achieved easily with containerization.
Containers are self-contained application execution environments that have their own memory, CPU, etc. They enable more efficient, more fluid utilization of system resources than virtual machines do because VMs have a additional weight in the OS they carry.
As containers have become popular, many container management systems have come to fill conceptual and operational gaps. Since the container ecosystem lives only inside the kernel, what if we need to use multiple containers and multiple kernels? When one container is ready to scale, how will the host hardware be allocated and optimized? Such scalability questions were answered very efficiently by the arrival of Kubernetes, the cloud-scale orchestration platform for containers.
Kubernetes manages clusters of hardware for use by multiple containers. The result is better container performance without unwanted resource waste.
Now comes Supergiant, which is built on top of Kubernetes. Supergiant transforms Kubernetes to a more efficient cluster deployment platform by equipping admins with easy configuration options, automatic load balancer management, and tighter resource allocation controls for maximum efficiency.
To keep our illustration simple, let’s look at how a traditional web application would look on Supergiant. Here we have two main components: the web-app component and the database component.
Why Supergiant Is Perfect for Microservice Architecture
Supergiant can sense the resource utilization of each component and separately scale resources for individual components or hardware topology for the whole application cloud.
Supergiant allows deployments, changes in individual services, etc. to be managed by simple command line tools.
As a microservice architecture grows, the optimizations performed by Supergiant dramatically benefit the performance of the entire system. Also, this architecture results in better microservice abstraction.
Imagine an application comprised of many services. We create a good microservice architecture by deploying each service as an independent container. When it comes time to scale hardware for our application cloud, Supergiant handles it for us, and our microservices remain resilient.
The Advantages Become Real when Things Change
Until this point, everything is fine, and it seems nothing is complicated. But what if we had to do a significant change in one of the services of our application? In a normal container cluster, this has to be done by individual tracking and replacing the particular service. This involves the same steps to be replicated along all the installed nodes, which is a tedious and time-consuming task.
Supergiant saves devops time because it can propagate such a change to every node in our cluster with just one single command. This timesaving affects not only the individual service changes/replaces but also security, logging, any other individual operation, or even the application as a whole.
Thanks for your time. Check us out at supergiant.io, Twitter, Reddit, slack, and Github!
Further Reading
Tutorials: How To Install Supergiant
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
How to Calculate Kubernetes Cost Savings
Kubernetes Series: Understanding Why Container Architecture is Important to the Future of Your Business
Top Reasons Businesses Should Move to Kubernetes Now
[Less]
|
|
Posted
almost 9 years
ago
by
Vineeth
In this blog post, we will compare traditional web application architecture with the emerging microservice architecture that is all the rage these days.
We will explain why microservice architecture is much preferred over
... [More]
traditional architecture for web applications and why it’s such a great idea to build microservices with Supergiant.
Old and Monolithic: Traditional Architecture
Traditional web application architecture consists of three major components:
User interface: The first and foremost thing that communicates with the user.
Database: Stored stateful application data.
Server-side application: The living monolith of our architecture; where calls from the UI are processed, data is retrieved and manipulated, processed data is sent back to the user, and new data is created and sent to our database.
In this kind of architecture, the server-side application acts as a single large structure. If we need to introduce a significant change or a new feature, we would include changes in the server-side application.
This architecture served as a good practice before the age of the cloud, but since the advent of the cloud, scaling these types of applications has become problematic.
The scaling issue arises when there are many instances of the application running and we need to make a change. Whether such a change is small or big, incorporating it into the entire architecture means the individual changes need to take place in all instances of the application, so they require bigger change cycle times.
New and Spiffy: Microservice Architecture
What are microservices?
Advantages of microservices architecture over traditional architecture.
Scaling using microservices.
The popularity of this architecture has skyrocketed among developers for a number of reasons, such as platforms moving to cloud, along with the increasing number of devices other than normal computing devices (read: IoT) coming into picture for a single application.
There are no strict definitions or guidelines to tell us how a microservice application should be designed or architectured, but it’s agreed across the industry that good microservice architecture involves applications built as several small components, which could interact with each other independently and without failing.
When each component service is independently deployable and scalable, this architecture gains significant advantage over the traditional type because an entity that requires change is independent and does not affect other services. Therefore, not only is the the deployment of changes simpler, but also the scaling of a particular service is greatly simplified and easier to control. Traditionally, scaling had to be done with all of the components inclusively -- or at least it required additional effort to scale only the needed components.
Here’s an illustration that depicts the difference between the scaling methodologies for a traditional monolithic service and a typical microservice for the same application.
Using this illustration, we note that the microservices architecture scales only those service that need scaling, whereas with traditional monolithic architecture, the entire application has to be scaled.
Another advantage of microservices architecture is that its methodology generally favors business model development instead of the older concept of specific component-based development. For example, if a feature like "add to cart" had a change to be made, the initial model required the entire team to get in line from the UI team, to the DB team, etc., and be focused as things gone wrong, which could break other things, etc.
In the microservices model, updates are much simpler, and the chance of unintended breakage is much lower because everything runs independently.
Microservice-based architectures are heavily decentralized because they focus on the successful reusability of individual components. Many applications use pre-written microservice libraries for quick deployment and might also modify those according to their use case, making development much faster.
Even Spiffier: Microservice Architecture on Supergiant
Introduction to containers concept
Kubernetes: the container management system
What is Supergiant Architecture?
Difference between Supergiant and Kubernetes
With the advent of cloud, developers have naturally drifted toward microservices when designing applications, thus the growing acceptance of container-based app deployment systems. With the adoption of microservices architecture, the need to separate or rather isolate the services from one another has become a priority, and this can be achieved easily with containerization.
Containers are self-contained application execution environments that have their own memory, CPU, etc. They enable more efficient, more fluid utilization of system resources than virtual machines do because VMs have a additional weight in the OS they carry.
As containers have become popular, many container management systems have come to fill conceptual and operational gaps. Since the container ecosystem lives only inside the kernel, what if we need to use multiple containers and multiple kernels? When one container is ready to scale, how will the host hardware be allocated and optimized? Such scalability questions were answered very efficiently by the arrival of Kubernetes, the cloud-scale orchestration platform for containers.
Kubernetes manages clusters of hardware for use by multiple containers. The result is better container performance without unwanted resource waste.
Now comes Supergiant, which is built on top of Kubernetes. Supergiant transforms Kubernetes to a more efficient cluster deployment platform by equipping admins with easy configuration options, automatic load balancer management, and tighter resource allocation controls for maximum efficiency.
|
|
Posted
almost 9 years
ago
by
Vineeth
In this blog post, we will compare traditional web application architecture with the emerging microservice architecture that is all the rage these days.
We will explain why microservice architecture is much preferred over traditional architecture for web applications and why it’s such a great idea to build microservices with Supergiant.
Old and Monolithic: Traditional Architecture
Traditional web application architecture consists of three major components:
User interface: The first and foremost thing that communicates with the user.
Database: Stores stateful application data.
Server-side application: The living monolith of our architecture; where calls from the UI are processed, data is retrieved and manipulated, processed data is sent back to the user, and new data is created and sent to our database.
In this kind of architecture, the server-side application acts as a single large structure. If we need to introduce a significant change or a new feature, we would include changes in the server-side application.
This architecture served well before the age of the cloud, but in cloud environments, scaling these applications has become problematic.
The scaling issue arises when many instances of the application are running and we need to make a change. Whether the change is small or large, it must be rolled out to every running instance, which lengthens change cycle times.
New and Spiffy: Microservice Architecture
What are microservices?
Advantages of microservices architecture over traditional architecture.
Scaling using microservices.
The popularity of this architecture has skyrocketed among developers for a number of reasons, such as the migration of platforms to the cloud and the growing number of devices beyond conventional computers (read: IoT) interacting with a single application.
There are no strict definitions or guidelines that tell us how a microservice application should be designed or architected, but the industry agrees that good microservice architecture builds an application as several small components that interact with one another independently and without cascading failures.
When each component service is independently deployable and scalable, this architecture gains a significant advantage over the traditional type: a component that requires change is independent and does not affect other services. Not only is the deployment of changes simpler, but the scaling of a particular service is also greatly simplified and easier to control. Traditionally, scaling had to be done with all of the components inclusively -- or at least it required additional effort to scale only the needed components.
Here’s an illustration that depicts the difference between the scaling methodologies for a traditional monolithic service and a typical microservice for the same application.
Using this illustration, we note that the microservice architecture scales only those services that need scaling, whereas with traditional monolithic architecture, the entire application has to be scaled.
Another advantage of microservice architecture is that its methodology generally favors development around business capabilities instead of the older component-based model. For example, if a feature like "add to cart" needed a change, the old model required every team, from the UI team to the DB team, to coordinate on it, because a mistake in one place could break other parts of the application.
In the microservices model, updates are much simpler, and the chance of unintended breakage is much lower because everything runs independently.
Microservice-based architectures are heavily decentralized because they focus on the successful reusability of individual components. Many applications use pre-written microservice libraries for quick deployment and might also modify those according to their use case, making development much faster.
Even Spiffier: Microservice Architecture on Supergiant
Introduction to the container concept
Kubernetes: the container management system
What is the Supergiant architecture?
Differences between Supergiant and Kubernetes
With the advent of the cloud, developers have naturally drifted toward microservices when designing applications, hence the growing adoption of container-based app deployment systems. With microservice architecture, isolating services from one another has become a priority, and this is achieved easily with containerization.
Containers are self-contained application execution environments with their own memory, CPU, etc. They enable more efficient, more fluid utilization of system resources than virtual machines do, because VMs carry the additional weight of a full guest OS.
As containers have become popular, many container management systems have emerged to fill conceptual and operational gaps. Since containers share a single host kernel, what happens when we need to run containers across multiple hosts and kernels? When one container is ready to scale, how will host hardware be allocated and optimized? Such scalability questions were answered very efficiently by the arrival of Kubernetes, the cloud-scale orchestration platform for containers.
Kubernetes manages clusters of hardware for use by multiple containers. The result is better container performance without unwanted resource waste.
Now comes Supergiant, which is built on top of Kubernetes. Supergiant turns Kubernetes into a more efficient cluster deployment platform by equipping admins with easy configuration options, automatic load balancer management, and tighter resource allocation controls for maximum efficiency.
To keep our illustration simple, let’s look at how a traditional web application would look on Supergiant. Here we have two main components: the web-app component and the database component.
Why Supergiant Is Perfect for Microservice Architecture
Supergiant can sense the resource utilization of each component and separately scale resources for individual components or hardware topology for the whole application cloud.
Supergiant allows deployments, changes in individual services, etc. to be managed by simple command line tools.
As a microservice architecture grows, the optimizations performed by Supergiant dramatically benefit the performance of the entire system. This architecture also results in better microservice abstraction. Learn more about Kubernetes, which Supergiant is built on top of.
Imagine an application composed of many services. We create a good microservice architecture by deploying each service as an independent container. When it comes time to scale hardware for our application cloud, Supergiant handles it for us, and our microservices remain resilient.
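For illustration, here is a sketch of what "each service as an independent container" can look like in raw Kubernetes terms. The service name and image below are hypothetical, and Supergiant's component abstraction normally handles these details for you:

```shell
# Illustrative only: a minimal Kubernetes Deployment manifest for one
# microservice packaged as an independent container (hypothetical names).
cat > web-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:1.21
        ports:
        - containerPort: 80
EOF
echo "manifest written: web-app.yaml"
```

Because each service gets its own Deployment like this one, it can be updated and scaled independently of the others, which is exactly the property discussed above.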
The Advantages Become Real When Things Change
Until this point, everything seems simple. But what if we need to make a significant change to one of our application's services? In an ordinary container cluster, we have to track down and replace that service individually, repeating the same steps on every node, which is a tedious and time-consuming task.
Supergiant saves devops time because it can propagate such a change to every node in our cluster with a single command. This time savings applies not only to individual service changes and replacements but also to security, logging, any other individual operation, or even the application as a whole.
Thanks for your time. Check us out at supergiant.io, Twitter, Reddit, Slack, and GitHub!
Further Reading
Tutorials: How To Install Supergiant
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
How to Calculate Kubernetes Cost Savings
Kubernetes Series: Understanding Why Container Architecture is Important to the Future of Your Business
Top Reasons Businesses Should Move to Kubernetes Now
|
|
Posted
almost 9 years
ago
by
brian
By the time this tutorial is over, you will have used Supergiant to deploy an auto-scaling Kubernetes cluster on DigitalOcean.
This will keep hardware costs as low as possible while providing auto-scaling performance when it is needed.
For this tutorial, you will need:
A running Supergiant installation
DigitalOcean API Access Token with Read and Write permissions
Your SSH key fingerprint, from the DigitalOcean Dashboard > Click profile avatar > Settings > Security
A region that supports block storage where you want to deploy your Kubernetes cluster. As of this writing, it should be one of FRA1, NYC1, or SFO1.
Note: This is an article in a series to help get Kubernetes running quickly on DigitalOcean using Supergiant. If a Supergiant server is not yet running, see: How to Run a Supergiant Server on DigitalOcean
Kubernetes on DigitalOcean Hardware that Auto-Scales for You
We’re excited to release a Supergiant provider layer for DigitalOcean. DigitalOcean has emerged as an incredibly fast-growing and inexpensive cloud hardware provider that has had great success standing up to behemoths in their space. Not only are their droplet prices incredibly reasonable, but we’ve also found their provisioning performance faster than most other providers out there, making them an ideal host for auto-scaling Kubernetes.
Let’s get started.
Step 1: Login to the Supergiant Dashboard
To log in to Supergiant, you will need an admin username and password. You either created this on Supergiant's first run (see the installation tutorial), or you will need to be given a username and password by your Supergiant administrator.
Step 2: Add Cloud Credentials
Click to visit the Cloud Accounts view, and click to create a new entry with your DigitalOcean API access token.
Your credentials will need DigitalOcean API v2 read and write permissions.
Enter the following and click Create.
{
"credentials": {
"token": "YOUR APIv2 ACCESS TOKEN"
},
"name": "WHATEVER CLEVER NAME YOU LIKE",
"provider": "digitalocean"
}
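If you prefer to prepare the payload on the command line first, here is a small sketch; the token value and account name are placeholders, and keeping the token in an environment variable keeps it out of the pasted JSON:

```shell
# Sketch: assemble the cloud-account payload from an environment variable
# (token and name below are placeholders, not real credentials).
DO_TOKEN="YOUR-APIv2-ACCESS-TOKEN"
cat > cloud-account.json <<EOF
{
  "credentials": {
    "token": "${DO_TOKEN}"
  },
  "name": "digitalocean",
  "provider": "digitalocean"
}
EOF
# Confirm the result is well-formed JSON before pasting it into the dashboard.
python3 -m json.tool cloud-account.json > /dev/null && echo "payload is valid JSON"
```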
After the cloud account is created, remember the name you’ve given it, because you will use it in the next step. You can always look it up again in the Cloud Accounts dashboard view.
Step 3: Create a Kube
Click to go to the Kubes view, and click to create a Kube.
Some values will be pre-populated for you. Edit whatever you like, but be sure to edit or review the following entries:
"cloud_account_name": the name of the Cloud Account you created above
"region": the region in which you would like to launch your Kubernetes cluster. It must be a region that supports DigitalOcean's Block Storage. Example: “nyc1”
"ssh_key_fingerprint": the fingerprint of the public key that will be authorized to access the Kubernetes API. Retrieve this from DigitalOcean Dashboard > Click profile avatar > Settings > Security
"master_node_size": the instance size for your Kubernetes master; the value you enter here must be one of the sizes available in your Supergiant config JSON file; 512MB of RAM is sufficient.
"name": a name for your Kube that starts with a letter, limited to 12 characters, made of lowercase letters, numbers, and/or dashes (`-`)
"node_sizes": build an array of DigitalOcean node sizes chosen from sizes contained in the Supergiant config/config.json file. This list will be used to provision hardware for auto-scaling Kubernetes.
My configuration looks like this:
{
"cloud_account_name": "digitalocean",
"digitalocean_config": {
"region": "nyc1",
"ssh_key_fingerprint": "MY SSH KEY FINGERPRINT"
},
"master_node_size": "512mb",
"name": "tutkube",
"node_sizes": [
"512mb",
"1gb",
"2gb",
"4gb",
"8gb",
"16gb",
"32gb",
"48gb",
"64gb"
]
}
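Before clicking Create, you can sanity-check the Kube-name rules listed above locally. This is just an illustrative helper (Supergiant validates the name on Create anyway): the pattern requires a leading lowercase letter, at most 12 characters total, and only lowercase letters, numbers, and dashes:

```shell
# Sketch: check a candidate Kube name against the constraints described above.
name="tutkube"
if printf '%s' "$name" | grep -Eq '^[a-z][a-z0-9-]{0,11}$'; then
  echo "valid kube name: $name"
else
  echo "invalid kube name: $name"
fi
```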
When your edits are complete, click Create.
Step 4: Watch Supergiant Go!
Supergiant will orchestrate the provisioning of DigitalOcean services, including block storage, security settings, networking, and more. It will then provision your master and minion instances, displaying log output as it goes.
The longest wait time will be “waiting for Kubernetes” as you wait for the DigitalOcean instance to be provisioned. Just be patient; DigitalOcean is working very hard for you.
Access Kubernetes
Once the process completes, you may click on the Kube ID to get details. To verify Kubernetes is running, get the username, password, and master_public_ip from the Kube details in the Supergiant dashboard.
From your terminal, enter the following:
curl --insecure https://USERNAME:PASSWORD@MASTER_PUBLIC_IP/api/v1/
You should see a JSON response describing the API. Congratulations -- you’ve just launched an auto-scaling Kubernetes cluster on DigitalOcean!
Teardown
If you want to tear down your tutorial Kube, from the Supergiant dashboard, simply click to view Kubes, select the one you wish to delete, and then select Delete from the Actions menu. It’s as simple as that. Supergiant will clean up after itself and remove all the associated DigitalOcean services unique to the Kube.
Access the Supergiant Community
Remember, if you have trouble, or if you want to talk to other users about how they're making the most of Supergiant, our community hangs out on the Supergiant public Slack channel.
We would love any feedback you want to leave. We're working hard to add features that our community wants most, so all your comments are helpful to us. Visit the Supergiant Slack channel, and post your questions and comments.
Where To Next?
This tutorial is one in a series to help you get started with Kubernetes using Supergiant. From here we recommend you check out the following:
Supergiant.io Tutorials Index
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
Top Reasons Businesses Should Move to Kubernetes Now
Kubernetes Series: Understanding Why Container Architecture is Important to the Future of Your Business
|
|
Posted
almost 9 years
ago
by
brian
By the end of this tutorial, you will have started Supergiant as a service on DigitalOcean.
Supergiant and Kubernetes are open source and free forever. However, DigitalOcean hardware usage rates apply.
For this tutorial, you will need:
A paid DigitalOcean account
A public/private key pair handy for accessing your DigitalOcean server via SSH
Prepare for Launch
Log in to your DigitalOcean account dashboard view. From there, DigitalOcean makes it very easy to provision cloud hardware.
Step 1: Create and Connect to Your Droplet
From your dashboard, click Create Droplet to get started. Start by choosing an image: select Ubuntu. I’ve selected 16.04, which is the long-term support release.
You will only need the 512MB / 1CPU size to run Supergiant as a service.
About block storage and region selection: You don’t need to select block storage for this server. The default server storage will be enough, and Supergiant will automatically provision block storage for your Kubernetes cluster later. However, if you want to run your Supergiant server in the same datacenter region as your Kubernetes cluster, select a region that allows block storage, like FRA1, NYC1, or SFO1.
Add your public key, so you can access the server’s terminal via SSH later. If you have already added SSH keys to your DigitalOcean account, be sure the key you want to use is checked.
Give your server a unique name and click Create.
Your Droplet will be up and running in minutes. Once it’s up, SSH into it using the public IP you retrieve from the dashboard.
Retrieve the IP Address of the Droplet you created.
On DigitalOcean Ubuntu 16.04, SSH using the username root and the keypair you added to the server above.
ssh -i /path/to/private_key [email protected]
Step 2: Get Supergiant
Always check the Supergiant latest release page for the current server binary URL. As of this tutorial, the latest release is v0.13.2. Execute the following, which will download the Supergiant v0.13.2 server binary to your /usr/bin/ directory:
curl https://github.com/supergiant/... -L -o /usr/bin/supergiant
Be sure to make the binary executable.
chmod +x /usr/bin/supergiant
Step 3: Configure Supergiant
We will grab the example configuration file from GitHub and put it in a sensible place. We don’t recommend using these default settings in production, but the configuration will work as is for quick-start and testing purposes.
Execute the following to download the example configuration file:
sudo curl https://raw.githubusercontent.... --create-dirs -o /etc/supergiant/config.json
Use your favorite text editor to change the following configuration parameters in config.json to fit our Ubuntu environment:
{
...
"sqlite_file": "/var/lib/supergiant/development.db",
...
"log_file": "/var/log/supergiant/development.log",
...
}
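If you would rather script these edits than open an editor, here is a sketch using sed, shown against a local sample copy; on the server the target would be /etc/supergiant/config.json (run the sed command with sudo). The sample keys below mirror the fragment above:

```shell
# Sketch: apply the two path changes non-interactively with GNU sed.
# A local sample config stands in for /etc/supergiant/config.json here.
cat > config.json <<'EOF'
{
  "sqlite_file": "tmp/development.db",
  "log_file": "tmp/development.log"
}
EOF
sed -i \
  -e 's|"sqlite_file": *"[^"]*"|"sqlite_file": "/var/lib/supergiant/development.db"|' \
  -e 's|"log_file": *"[^"]*"|"log_file": "/var/log/supergiant/development.log"|' \
  config.json
grep sqlite_file config.json
```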
Create the directories we just referenced so Supergiant can write its DB and log files:
sudo mkdir -p /var/lib/supergiant /var/log/supergiant
Next, configure systemd to start Supergiant as a service. Use your favorite editor to create the following unit file at /etc/systemd/system/supergiant.service:
[Unit]
Description=Supergiant Server
After=syslog.target
After=network.target
[Service]
ExecStart=/usr/bin/supergiant --config-file /etc/supergiant/config.json
Restart=on-abort
[Install]
WantedBy=multi-user.target
Step 4: Create an Admin User
The first time Supergiant runs, an admin user will be created, and the service will output the username and password to the console.
Execute the following in the terminal to test your configuration and create your admin user:
sudo supergiant --config-file /etc/supergiant/config.json
The terminal will print startup output that includes the generated admin credentials.
Save the randomly generated password, so you can access the dashboard later. Type Control+C to stop the server after the user is created.
If you have any trouble by this point, please join us on our Slack channel.
Step 5: Start Your Supergiant Server
Reload systemd so it picks up the new unit file, then start the Supergiant service:
sudo systemctl daemon-reload
sudo systemctl start supergiant.service
The system shouldn't produce any output if this is successful.
To check the status of the service, you may execute the following:
sudo systemctl status supergiant.service
Last, enable the service to start on bootup:
sudo systemctl enable supergiant.service
Access your Supergiant Dashboard
By default in development mode, Supergiant serves HTTP over port 8080 and Ubuntu’s simple firewall is off.
To access the dashboard, get your server’s IP from your DigitalOcean Droplets list and open http://YOUR_SERVER_IP:8080 in a web browser (port 8080 is the default; use whatever port you customized in your config.json file above). Then log in with the username "admin" and the password generated above.
That’s it! You’ve successfully created a Supergiant server that runs as a service for development purposes!
Access the Supergiant Community
Remember, if you have trouble, or if you want to talk to other users about how they're making the most of Supergiant, our community hangs out on the Supergiant public Slack channel.
We would love any feedback you want to leave. We're working hard to add features that our community wants most, so all your comments are helpful to us. Visit the Supergiant Slack channel, and post your questions and comments.
Where to Next?
This tutorial is one in a series to help you get started with Kubernetes using Supergiant. From here we recommend you check out the following:
Auto-Scaling Kubernetes on DigitalOcean with Supergiant
Auto-Scaling Kubernetes Cluster on AWS EC2 with Supergiant
Auto-Scaling Kubernetes Cluster with OpenStack, Supergiant
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
Top Reasons Businesses Should Move to Kubernetes Now
|