Posted about 7 years ago by Kirill
In this tutorial, we'll guide you through installing Supergiant and autoscaling your Kubernetes cluster on Google Compute Engine. Your cloud hardware will autoscale according to the parameters you set, keeping costs as low as possible. When you launch a Kube with Supergiant, the system immediately applies Supergiant's packing algorithm, which is designed to handle the work of scaling up and scaling down for you. Sounds promising? Yes, it is.
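Supergiant's actual packing logic is internal to the project, but the basic idea can be illustrated with a hypothetical first-fit helper that picks the smallest node size able to satisfy a resource request:

```python
# Illustrative sketch only: NOT Supergiant's actual algorithm.
# Node sizes mirror the GCE entries used later in this tutorial.
SIZES = [
    ("n1-standard-1", 3.75),
    ("n1-standard-2", 7.5),
    ("n1-standard-4", 15.0),
    ("n1-standard-8", 30.0),
]

def pick_node_size(requested_ram_gib, sizes=SIZES):
    """Return the smallest node size that fits the RAM request, else None."""
    for name, ram_gib in sizes:
        if ram_gib >= requested_ram_gib:
            return name
    return None
```

Packing requests onto the smallest adequate size is what keeps idle capacity, and therefore cost, low.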
For this tutorial, you will need:
A GCE account and a running GCE Virtual Machine (VM) instance. An instance with 512MB RAM and 1 CPU is enough to run Supergiant as a service. When creating a Kubernetes master, Supergiant internally defaults to n1-standard-1. For this tutorial, we've selected Ubuntu 16.04 Xenial as the OS running on an n1-standard-1 instance.
A GCP service account with full administrator access and a secret key for your instance. Each VM instance runs with the default service account attached, but you'll need to create a service account secret key manually if one does not yet exist.
SSH public keys so Supergiant can securely access your GCE instances.
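If you don't have a key pair yet, one way to generate it (the file path and comment below are examples, not required names):

```shell
# Generate a dedicated RSA key pair for Supergiant; the path and comment
# are examples. Paste the .pub contents under Metadata/SSH keys in the
# GCP console so the key is distributed to your instances.
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N "" -C "supergiant" -f "$HOME/.ssh/supergiant_gce"
cat "$HOME/.ssh/supergiant_gce.pub"
```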
Installing Supergiant on your GCE Instance
Step 1. Connect to your Instance
To install Supergiant, first connect to your GCE instance using the Google Cloud Platform console or the gcloud command-line tool. Google Cloud provides a simple guide with the steps to do this.
Step 2. Get the Latest Supergiant Release
After you have connected to the instance, get the latest stable release of Supergiant:
sudo curl https://github.com/supergiant/supergiant/releases/download/v0.15.6/supergiant-server-linux-amd64 -L -o /usr/bin/supergiant
The command above downloads the v0.15.6 release of Supergiant from GitHub and saves it as /usr/bin/supergiant on your GCE instance.
Then make the downloaded binary executable:
sudo chmod +x /usr/bin/supergiant
Step 3. Configure Supergiant
Next, configure your Supergiant service. To do this, grab the example configuration from Supergiant's GitHub repository. (We don’t recommend using these default settings in production, but the configuration will work as-is for quick-start and testing purposes.)
Execute the following command to download the configuration file:
sudo curl https://raw.githubusercontent.com/supergiant/supergiant/master/config/config.json.example --create-dirs -o /etc/supergiant/config.json
Then use your favorite text editor to change the log and database file locations in config.json to fit your Ubuntu environment.
{
...
"sqlite_file": "/var/lib/supergiant/development.db",
...
"log_file": "/var/log/supergiant/development.log",
...
}
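If you'd rather script the change than edit by hand, a small Python sketch (the two keys are taken from the snippet above; the path is relative here for illustration, /etc/supergiant/config.json on the VM):

```python
import json
import pathlib

# Patch the example config's file locations to the Ubuntu paths used in
# this tutorial (keys assumed from the snippet above).
path = pathlib.Path("config.json")  # /etc/supergiant/config.json on the VM
cfg = json.loads(path.read_text()) if path.exists() else {}
cfg["sqlite_file"] = "/var/lib/supergiant/development.db"
cfg["log_file"] = "/var/log/supergiant/development.log"
path.write_text(json.dumps(cfg, indent=2))
```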
The configuration file also contains other useful settings, such as the Supergiant publish host and port, and the default GCE instance sizes for your future Kubes:
"publish_host": "localhost",
"http_port": "8080",
"gce": [
{"name": "n1-standard-1", "ram_gib": 3.75, "cpu_cores": 1},
{"name": "n1-standard-2", "ram_gib": 7.5, "cpu_cores": 2},
{"name": "n1-standard-4", "ram_gib": 15, "cpu_cores": 4},
{"name": "n1-standard-8", "ram_gib": 30, "cpu_cores": 8}
]
Next, create the directories we just referenced so Supergiant can write its DB and log files:
sudo mkdir -p /var/lib/supergiant /var/log/supergiant
To run Supergiant as a service, you also need to configure systemd. Use your favorite editor to create the following unit file at /etc/systemd/system/supergiant.service. After saving it, run sudo systemctl daemon-reload so systemd picks up the new unit.
[Unit]
Description=Supergiant Server
After=syslog.target
After=network.target
[Service]
ExecStart=/usr/bin/supergiant --config-file /etc/supergiant/config.json
Restart=on-abort
[Install]
WantedBy=multi-user.target
Step 4. Create an Admin User
The first time Supergiant runs, the system will create an admin user and output the username and password to the console. Execute the following command in the terminal to test your configuration and create your admin user:
sudo supergiant --config-file /etc/supergiant/config.json
Then save the randomly generated password; you'll need it to access the Supergiant dashboard later. Press Ctrl+C to stop the server after the user is created.
Step 5. Start your Supergiant Server
Now you are ready to start your new Supergiant system service like this:
sudo systemctl start supergiant.service
If everything is fine, the command shouldn't produce any output. To check the status of the service, you can execute:
sudo systemctl status supergiant.service
It's a good idea to enable the service to start on boot-up:
sudo systemctl enable supergiant.service
Step 6. Access your Supergiant Dashboard on GCE
Supergiant serves HTTP on port 8080 by default. Verify that your GCE instance allows HTTP traffic and that port 8080 is open. If not, add a new firewall rule for tcp:8080 under "VPC network/Firewall rules" in your GCP console (or run gcloud compute firewall-rules create allow-supergiant --allow tcp:8080). Then get the public IP of your instance from the GCE instances list and open it in your browser followed by :8080 (or whatever port you specified in your config.json file). Once the dashboard loads, log in with the username admin and the password generated above.
That’s it! You’ve successfully created a Supergiant server that runs as a service for development purposes! Check out the view of your Supergiant dashboard below.
Step 7. Link a GCE Cloud Account
Next, create a new GCE cloud account by adding your GCE cloud credentials: click "Cloud Accounts", then "New", and select GCE from the dropdown list. You'll see the following JSON template with placeholders for cloud credentials.
{
"credentials": {
"auth_provider_x509_cert_url": "",
"auth_uri": "",
"client_email": "",
"client_id": "",
"client_x509_cert_url": "",
"private_key": "",
"private_key_id": "",
"project_id": "",
"token_uri": "",
"type": ""
},
"name": "",
"provider": "gce"
}
To get these credentials, you need a GCE service account and a service account key generated to establish its identity. In general, a service account is the identity an instance or application runs as; it identifies your instances and applications to other GCP services (e.g., block storage). You can find your default service account under the IAM & Admin/Service accounts section of the Google Cloud Console. If there is no service account key for your account, generate one. GCP will create a simple JSON file with all the credentials you need to set up a GCE cloud account in Supergiant.
The file should look like this:
{
"type": "service_account",
"project_id": "supergiant",
"private_key_id": "de91341b6442ebbb8e0b996a9345229cf49f24c9",
"private_key": "YOUR PRIVATE KEY",
"client_id": "117179802345063645496",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/supergiant%40argon-producer-704.iam.gserviceaccount.com"
}
After downloading, save the file in a safe place, because the key cannot be downloaded again. Copy the necessary fields from this file into your Supergiant template. If everything is fine, you'll see something like this in your Supergiant "Cloud Accounts" window:
{
"id": 1,
"uuid": "527fbe39-68a0-452a-a6e8-fbc00bfcb324",
"created_at": "2018-03-06T23:34:07.402158321Z",
"updated_at": "2018-03-06T23:34:07.402158321Z",
"name": "gce",
"provider": "gce"
}
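For illustration, mapping the downloaded key file onto Supergiant's cloud-account shape can be sketched like this (build_cloud_account is a hypothetical helper, not part of Supergiant; field names come from the templates shown above):

```python
import json

def build_cloud_account(key_json, name="gce"):
    """Wrap a GCP service-account key in Supergiant's cloud-account payload
    (field names taken from the JSON templates shown above)."""
    return {
        "name": name,
        "provider": "gce",
        "credentials": json.loads(key_json),
    }

# Minimal stand-in for the downloaded key file:
key = '{"type": "service_account", "project_id": "supergiant"}'
payload = build_cloud_account(key)
```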
Step 8. Create Your Kube and Autoscale It
Now that you have created your GCE cloud account, you can add a Kube. To create one, click Kubes/New and select GCE from the dropdown list. You'll see some values pre-populated for you.
Edit whatever you like, but be sure to edit or review the following entries:
"cloud_account_name": The name of the Cloud Account you created above.
"ssh_pub_key": Your public SSH key that may be found in your GCP console Metadata/SSH keys.
"zone": The region you would like to launch your Kubernetes cluster. All GCE regions support block storage.
"master_node_size": The instance size for your Kubernetes master; the value you enter here will depend on what options are available to your Supergiant configuration JSON file; 512MB of RAM is sufficient.
"name": A name for your Kube that starts with a letter, limited to 12 characters, made of lowercase letters, numbers, and/or dashes (`-`).
"node_sizes": An array of GCE node sizes chosen from sizes contained in the Supergiant configuration file.
This list will be used to provision hardware for autoscaling Kubernetes.
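The name rule above can be expressed as a quick check (a sketch; the regex simply encodes the stated constraints):

```python
import re

# Starts with a lowercase letter, at most 12 characters total,
# lowercase letters, digits, and dashes only.
KUBE_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,11}$")

def valid_kube_name(name):
    return bool(KUBE_NAME_RE.match(name))
```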
My configuration looks like this:
{
"cloud_account_name": "gce",
"gce_config": {
"ssh_pub_key": "MY SSH KEY",
"zone": "us-east1-b"
},
"master_node_size": "n1-standard-1",
"name": "gcetut",
"node_sizes": [
"n1-standard-1",
"n1-standard-2",
"n1-standard-4",
"n1-standard-8"
]
}
When your edits are complete, click Create.
Step 9. Watch Supergiant Go!
After you click the button, you'll see a green spinner indicating that Supergiant is provisioning your Kube. Supergiant orchestrates the provisioning of GCE services, including block storage, security settings, networking, and more. The longest wait will be "waiting for Kubernetes" while the GCE instances are provisioned. The process may take 5-7 minutes, so be patient. When it finishes, you'll see new Kubernetes master and minion instances in your GCE console.
Step 10. Access Kubernetes
Once the provisioning process is completed, click on the Kube ID to get details. To verify Kubernetes is running, get the username, password, and master_public_ip from the Kube details in the Supergiant dashboard.
You should see a JSON file like this when you click on the newly installed Kube:
"username": "F0EUvaZDywLmlux8",
"password": "lIuSy7bBA",
"heapster_version": "v1.4.0",
"heapster_metric_resolution": "20s",
"gce_config": {
"zone": "us-east1-b",
"master_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/operations/operation-1520513482047-566e621f93e19-4176219b-118ddc9a",
"minion_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-724/zones/us-east1-b/operations/operation-1520513482872-566e62205d4c0-a588823c-461e2288",
"master_nodes": [
"https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/instances/gcetut-master-avkge"
],
"master_name": "gcetut-master-avkge",
"kube_master_count": 1,
"ssh_pub_key": "SSH Public Key",
"kubernetes_version": "1.5.1",
"etcd_discovery_url": "https://discovery.etcd.io/9ef553b5d21b4ec8f7647667918e40dc",
"master_private_ip": "10.144.0.3"
},
"master_public_ip": "104.196.192.25"
From your terminal, run the following command:
curl --insecure https://USERNAME:PASSWORD@MASTER_PUBLIC_IP/api/v1/
You should see a JSON response describing the Kubernetes API.
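The same check can be built with Python's standard library. This sketch only constructs the basic-auth request (the host and credentials are the example values above), leaving the insecure TLS context to the caller:

```python
import base64
import urllib.request

def build_k8s_request(host, user, password):
    """Build a basic-auth request for the Kubernetes API root."""
    req = urllib.request.Request("https://%s/api/v1/" % host)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

req = build_k8s_request("104.196.192.25", "F0EUvaZDywLmlux8", "lIuSy7bBA")
# urllib.request.urlopen(req, context=ctx) would perform the call; an
# ssl.SSLContext with check_hostname=False and verify_mode=ssl.CERT_NONE
# mirrors curl's --insecure flag (for testing only, never in production).
```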
Congratulations! You have just launched an autoscaling Kubernetes cluster on GCE!
Teardown
If you want to delete your tutorial Kube from the Supergiant dashboard, simply click on Kubes, select the one you'd like to delete, and then choose Delete from the Actions menu. It’s as simple as that. Supergiant will clean up after itself and remove all the associated GCE services linked to the Kube.
[Less]
|
Posted
about 7 years
ago
by
Kirill
In this tutorial, we'll guide you through the process of installing Supergiant and autoscaling your Kubernetes cluster on Google Compute Engine. Your cloud hardware will autoscale according to the parameters you set to keep costs
... [More]
as low as possible. When you launch a Kube with Supergiant, the system will immediately use Supergiant’s packing algorithm specifically designed to automatically handle the work of scaling up and scaling down for you. Sounds promising? Yes, it is.
For this tutorial, you will need:
A GCE account and a running GCE Virtual Machine (VM) instance. You will need only the instances with 512MB/1CPU size to run Supergiant as a service. When creating a Kubernetes master, Supergiant internally defaults to n1-standard-1. For this tutorial, we've selected Ubuntu 16.04 Xenial as the OS to be run on the n1-standard-1 instance.
A GCP Service account with full administrator access and a secret key for your instance. Each VM instance is mounted with the default service account. However, you'll need to create a cloud service secret key manually if it does not yet exist.
SSH Public Keys for Supergiant to securely access your GCE instances
Installing Supergiant on your GCE Instance
Step 1. Connect to your Instance
To install Supergiant, you first need to connect to your GCE instance using Google Cloud Platform console or gcloud console tool. Consult a simple guide provided by the Google Cloud for the steps to do this.
Step 2. Get the Latest Supergiant Release
After you have connected to the instance, get the latest stable release of Supergiant:
sudo curl https://github.com/supergiant/... -L -o /usr/bin/supergiant
The command above downloads the latest stable release of Supergiant from GitHub and saves it to /user/bin/supergiant folder of your GCE instance.
Make sure to make the downloaded binary executable.
sudo chmod +x /usr/bin/supergiant
Step 3. Configure Supergiant
Next, configure your Supergiant service. To do this, grab the example configuration from Supergiant's GitHub repository. (We don’t recommend using these default settings in production, but the configuration will work as-is for quick-start and testing purposes.)
Execute the following command to download the configuration file:
sudo curl
https://raw.githubusercontent.... --create-dirs -o /etc/supergiant/config.json
Then use your favorite text editor to change the log and database file locations in config.json to fit your Ubuntu environment.
{
...
"sqlite_file": "/var/lib/supergiant/development.db",
...
"log_file": "/var/log/supergiant/development.log",
...
}
You can see that the configuration file also contains other useful features like Supergiant public host, port, and default parameters of GCE instances for your future Kubes.
"publish_host": "localhost",
"http_port": "8080",
"gce": [
{"name": "n1-standard-1", "ram_gib": 3.75, "cpu_cores": 1},
{"name": "n1-standard-2", "ram_gib": 7.5, "cpu_cores": 2},
{"name": "n1-standard-4", "ram_gib": 15, "cpu_cores": 4},
{"name": "n1-standard-8", "ram_gib": 30, "cpu_cores": 8}
]
Next, create the directories we just referenced so Supergiant can write its DB and log files:
sudo mkdir /var/lib/supergiant && sudo mkdir /var/log/supergiant
To start Supergiant as a service, you also need to configure Ubuntu systemd. Use your favorite editor to create the following init file in /etc/systemd/system/supergiant.service .
[Unit]
Description=Supergiant Server
After=syslog.target
After=network.target
[Service]
ExecStart=/usr/bin/supergiant --config-file /etc/supergiant/config.json
Restart=on-abort
[Install]
WantedBy=multi-user.target
Step 4. Create an Admin User
The first time Supergiant runs, the system will create an admin user and output the username and password to the console. Execute the following command in the terminal to test your configuration and create your admin user:
sudo supergiant --config-file /etc/supergiant/config.json
Then save the randomly generated password to access the Supergiant dashboard later. Type Control + C to stop the server after the user is created.
Step 5. Start your Supergiant Server
Now you are ready to start your new Supergiant system service like this:
sudo systemctl start supergiant.service
If everything is fine, the system shouldn't produce any output. To check the status of the service, you may also execute the following:
sudo systemctl status supergiant.service
It's a good idea to enable the service to start on boot-up:
sudo systemctl enable supergiant.service
Step 6. Access your Supergiant Dashboard on GCE
Supergiant serves HTTP over port 8080 by default. You need to verify that your GCE instance has http traffic enabled and 8080 port opened. If not, add new Firewall rules for port:8080 under "VPC network/Firewall rules" in your GCP console. After you verify that all needed ports are opened, simply get the public IP of your instance from GCE instances list, and put the IP followed by the port number: 8080 to access your Supergiant dashboard on its default port (or whatever port you have specified in your config.json file). After the dashboard has downloaded, put in "admin" and the "password" generated above to log in.
That’s it! You’ve successfully created a Supergiant server that runs as a service for development purposes! Check out the view of your Supergiant dashboard below.
Step 7. Link a GCE Cloud Account
You next need to create a new GCE cloud account by adding GCE cloud credentials, so click ''Cloud Accounts", then "New", and select GCE from the dropdown list. You'll see the following JSON template with placeholders for cloud credentials.
{
"credentials": {
"auth_provider_x509_cert_url": "",
"auth_uri": "",
"client_email": "",
"client_id": "",
"client_x509_cert_url": "",
"private_key": "",
"private_key_id": "",
"project_id": "",
"token_uri": "",
"type": ""
},
"name": "",
"provider": "gce"
}
To get these credentials, you should have a GCE service account and a service account key generated to establish its identity. In general, a service account is an identity with which an instance or an application is running, and you need it to identify your instances and applications to other GCP services (e.g., block storage). You can find your default service account under IAM & Admin/Service accounts section of the Google Cloud Console. If there is no service account key for your account, you should generate it. The GCP will create a simple JSON file with all credentials you need to set up GCE account in Supergiant.
The file should look like this:
{
"type": "service_account",
"project_id": "supergiant",
"private_key_id": "de91341b6442ebbb8e0b996a9345229cf49f24c9",
"private_key": "YOUR PRIVATE KEY",
"client_id": "117179802345063645496",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/supergiant%40argon-producer-704.iam.gserviceaccount.com"
}
After downloading, save the file in a safe place because it cannot be restored. Just paste necessary information from this file and copy it into your Supergiant template. If everything is fine, you'll get something like this in your Supergiant "Cloud Accounts" window:
{
"id": 1,
"uuid": "527fbe39-68a0-452a-a6e8-fbc00bfcb324",
"created_at": "2018-03-06T23:34:07.402158321Z",
"updated_at": "2018-03-06T23:34:07.402158321Z",
"name": "gce",
"provider": "gce"
}
Step 8. Create Your Kube and Autoscale It
Now that you have created your brand new GCE account, you can add a Kube to your GCE virtual instance. To create one, click Kubes/New and select GCE in the dropdown list. You'll see some values pre-populated for you.
Edit whatever you like, but be sure to edit or review the following entries:
"cloud_account_name": The name of the Cloud Account you created above.
"ssh_pub_key": Your public SSH key that may be found in your GCP console Metadata/SSH keys.
"zone": The region you would like to launch your Kubernetes cluster. All GCE regions support block storage.
"master_node_size": The instance size for your Kubernetes master; the value you enter here will depend on what options are available to your Supergiant configuration JSON file; 512MB of RAM is sufficient.
"name": A name for your Kube that starts with a letter, limited to 12 characters, made of lowercase letters, numbers, and/or dashes (`-`).
"node_sizes": An array of GCE node sizes chosen from sizes contained in the Supergiant configuration file.
This list will be used to provision hardware for autoscaling Kubernetes.
My configuration looks like this:
{
"cloud_account_name": "gce",
"gce_config": {
"ssh_pub_key": "MY SSH KEY",
"zone": "us-east1-b"
},
"master_node_size": "n1-standard-1",
"name": "gcetut",
"node_sizes": [
"n1-standard-1",
"n1-standard-2",
"n1-standard-4",
"n1-standard-8"
]
}
When your edits are complete, click Create.
Step 9. Watch Supergiant Go!
After you click the button, you'll see a green spinner indicating that Supergiant is provisioning your Kube. Supergiant will orchestrate provisioning of GCE services, including block storage, security settings, networking, and more. The longest wait time will be “waiting for Kubernetes” as you wait for the GCE instance to be provisioned. The process may take up to 5-7 minutes, but be patient. When it is finished, you'll see new Kubernetes master and minion instances in your GCE console.
Step 10. Access Kubernetes
Once the provisioning process is completed, click on the Kube ID to get details. To verify Kubernetes is running, get the username, password, and master_public_ip from the Kube details in the Supergiant dashboard.
You should see a JSON file like this when you click on the newly installed Kube:
"username": "F0EUvaZDywLmlux8",
"password": "lIuSy7bBA",
"heapster_version": "v1.4.0",
"heapster_metric_resolution": "20s",
"gce_config": {
"zone": "us-east1-b",
"master_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/operations/operation-1520513482047-566e621f93e19-4176219b-118ddc9a",
"minion_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-724/zones/us-east1-b/operations/operation-1520513482872-566e62205d4c0-a588823c-461e2288",
"master_nodes": [
"https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/instances/gcetut-master-avkge"
],
"master_name": "gcetut-master-avkge",
"kube_master_count": 1,
"ssh_pub_key": "SSH Public Key",
"kubernetes_version": "1.5.1",
"etcd_discovery_url": "https://discovery.etcd.io/9ef553b5d21b4ec8f7647667918e40dc",
"master_private_ip": "10.144.0.3"
},
"master_public_ip": "104.196.192.25"
From your terminal, run the following command:
curl --insecure https://USERNAME:PASSWORD@MASTER_PUBLIC_IP/api/v1/
You should see a JSON response describing the Kubernetes API.
Congratulations! You have just launched an autoscaling Kubernetes cluster on GCE!
Teardown
If you want to delete your tutorial Kube from the Supergiant dashboard, simply click on Kubes, select the one you'd like to delete, and then choose Delete from the Actions menu. It’s as simple as that. Supergiant will clean up after itself and remove all the associated GCE services linked to the Kube.
[Less]
|
Posted
about 7 years
ago
by
Kirill
In this tutorial, we'll guide you through the process of installing Supergiant and autoscaling your Kubernetes cluster on Google Compute Engine. Your cloud hardware will autoscale according to the parameters you set to keep costs
... [More]
as low as possible. When you launch a Kube with Supergiant, the system will immediately use Supergiant’s packing algorithm specifically designed to automatically handle the work of scaling up and scaling down for you. Sounds promising? Yes, it is.
For this tutorial, you will need:
A GCE account and a running GCE Virtual Machine (VM) instance. You will need only the instances with 512MB/1CPU size to run Supergiant as a service. When creating a Kubernetes master, Supergiant internally defaults to n1-standard-1. For this tutorial, we've selected Ubuntu 16.04 Xenial as the OS to be run on the n1-standard-1 instance.
A GCP Service account with full administrator access and a secret key for your instance. Each VM instance is mounted with the default service account. However, you'll need to create a cloud service secret key manually if it does not yet exist.
SSH Public Keys for Supergiant to securely access your GCE instances
Installing Supergiant on your GCE Instance
Step 1. Connect to your Instance
To install Supergiant, you first need to connect to your GCE instance using Google Cloud Platform console or gcloud console tool. Consult a simple guide provided by the Google Cloud for the steps to do this.
Step 2. Get the Latest Supergiant Release
After you have connected to the instance, get the latest stable release of Supergiant:
curl https://github.com/supergiant/supergiant/releases/download/v0.15.6/supergiant-server-linux-amd64 -L -o /usr/bin/supergiant
The command above downloads the latest stable release of Supergiant from GitHub and saves it to /user/bin/supergiant folder of your GCE instance.
Make sure to make the downloaded binary executable.
sudo chmod +x /usr/bin/supergiant
Step 3. Configure Supergiant
Next, configure your Supergiant service. To do this, grab the example configuration from Supergiant's GitHub repository. (We don’t recommend using these default settings in production, but the configuration will work as-is for quick-start and testing purposes.)
Execute the following command to download the configuration file:
sudo curl https://raw.githubusercontent.com/supergiant/supergiant/master/config/config.json.example --create-dirs -o /etc/supergiant/config.json
Then use your favorite text editor to change the log and database file locations in config.json to fit your Ubuntu environment.
{
...
"sqlite_file": "/var/lib/supergiant/development.db",
...
"log_file": "/var/log/supergiant/development.log",
...
}
You can see that the configuration file also contains other useful features like Supergiant public host, port, and default parameters of GCE instances for your future Kubes.
"publish_host": "localhost",
"http_port": "8080",
"gce": [
{"name": "n1-standard-1", "ram_gib": 3.75, "cpu_cores": 1},
{"name": "n1-standard-2", "ram_gib": 7.5, "cpu_cores": 2},
{"name": "n1-standard-4", "ram_gib": 15, "cpu_cores": 4},
{"name": "n1-standard-8", "ram_gib": 30, "cpu_cores": 8}
]
Next, create the directories we just referenced so Supergiant can write its DB and log files:
sudo mkdir /var/lib/supergiant && sudo mkdir /var/log/supergiant
To start Supergiant as a service, you also need to configure Ubuntu systemd. Use your favorite editor to create the following init file in /etc/systemd/system/supergiant.service .
[Unit]
Description=Supergiant Server
After=syslog.target
After=network.target
[Service]
ExecStart=/usr/bin/supergiant --config-file /etc/supergiant/config.json
Restart=on-abort
[Install]
WantedBy=multi-user.target
Step 4. Create an Admin User
The first time Supergiant runs, the system will create an admin user and output the username and password to the console. Execute the following command in the terminal to test your configuration and create your admin user:
sudo supergiant --config-file /etc/supergiant/config.json
Then save the randomly generated password to access the Supergiant dashboard later. Type Control + C to stop the server after the user is created.
Step 5. Start your Supergiant Server
Now you are ready to start your new Supergiant system service like this:
sudo systemctl start supergiant.service
If everything is fine, the system shouldn't produce any output. To check the status of the service, you may also execute the following:
sudo systemctl status supergiant.service
It's a good idea to enable the service to start on boot-up:
sudo systemctl enable supergiant.service
Step 6. Access your Supergiant Dashboard on GCE
Supergiant serves HTTP over port 8080 by default. You need to verify that your GCE instance has http traffic enabled and 8080 port opened. If not, add new Firewall rules for port:8080 under "VPC network/Firewall rules" in your GCP console. After you verify that all needed ports are opened, simply get the public IP of your instance from GCE instances list, and put the IP followed by the port number: 8080 to access your Supergiant dashboard on its default port (or whatever port you have specified in your config.json file). After the dashboard has downloaded, put in "admin" and the "password" generated above to log in.
That’s it! You’ve successfully created a Supergiant server that runs as a service for development purposes! Check out the view of your Supergiant dashboard below.
Step 7. Link a GCE Cloud Account
You next need to create a new GCE cloud account by adding GCE cloud credentials, so click ''Cloud Accounts", then "New", and select GCE from the dropdown list. You'll see the following JSON template with placeholders for cloud credentials.
{
"credentials": {
"auth_provider_x509_cert_url": "",
"auth_uri": "",
"client_email": "",
"client_id": "",
"client_x509_cert_url": "",
"private_key": "",
"private_key_id": "",
"project_id": "",
"token_uri": "",
"type": ""
},
"name": "",
"provider": "gce"
}
To get these credentials, you should have a GCE service account and a service account key generated to establish its identity. In general, a service account is an identity with which an instance or an application is running, and you need it to identify your instances and applications to other GCP services (e.g., block storage). You can find your default service account under IAM & Admin/Service accounts section of the Google Cloud Console. If there is no service account key for your account, you should generate it. The GCP will create a simple JSON file with all credentials you need to set up GCE account in Supergiant.
The file should look like this:
{
"type": "service_account",
"project_id": "supergiant",
"private_key_id": "de91341b6442ebbb8e0b996a9345229cf49f24c9",
"private_key": "YOUR PRIVATE KEY",
"client_id": "117179802345063645496",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/supergiant%40argon-producer-704.iam.gserviceaccount.com"
}
After downloading, save the file in a safe place because it cannot be restored. Just paste necessary information from this file and copy it into your Supergiant template. If everything is fine, you'll get something like this in your Supergiant "Cloud Accounts" window:
{
"id": 1,
"uuid": "527fbe39-68a0-452a-a6e8-fbc00bfcb324",
"created_at": "2018-03-06T23:34:07.402158321Z",
"updated_at": "2018-03-06T23:34:07.402158321Z",
"name": "gce",
"provider": "gce"
}
Step 8. Create Your Kube and Autoscale It
Now that you have created your brand new GCE account, you can add a Kube to your GCE virtual instance. To create one, click Kubes/New and select GCE in the dropdown list. You'll see some values pre-populated for you.
Edit whatever you like, but be sure to edit or review the following entries:
"cloud_account_name": The name of the Cloud Account you created above.
"ssh_pub_key": Your public SSH key that may be found in your GCP console Metadata/SSH keys.
"zone": The region you would like to launch your Kubernetes cluster. All GCE regions support block storage.
"master_node_size": The instance size for your Kubernetes master; the value you enter here will depend on what options are available to your Supergiant configuration JSON file; 512MB of RAM is sufficient.
"name": A name for your Kube that starts with a letter, limited to 12 characters, made of lowercase letters, numbers, and/or dashes (`-`).
"node_sizes": An array of GCE node sizes chosen from sizes contained in the Supergiant configuration file.
This list will be used to provision hardware for autoscaling Kubernetes.
My configuration looks like this:
{
"cloud_account_name": "gce",
"gce_config": {
"ssh_pub_key": "MY SSH KEY",
"zone": "us-east1-b"
},
"master_node_size": "n1-standard-1",
"name": "gcetut",
"node_sizes": [
"n1-standard-1",
"n1-standard-2",
"n1-standard-4",
"n1-standard-8"
]
}
When your edits are complete, click Create.
Step 9. Watch Supergiant Go!
After you click the button, you'll see a green spinner indicating that Supergiant is provisioning your Kube. Supergiant will orchestrate provisioning of GCE services, including block storage, security settings, networking, and more. The longest wait time will be “waiting for Kubernetes” as you wait for the GCE instance to be provisioned. The process may take up to 5-7 minutes, but be patient. When it is finished, you'll see new Kubernetes master and minion instances in your GCE console.
Step 10. Access Kubernetes
Once the provisioning process is completed, click on the Kube ID to get details. To verify Kubernetes is running, get the username, password, and master_public_ip from the Kube details in the Supergiant dashboard.
You should see a JSON file like this when you click on the newly installed Kube:
"username": "F0EUvaZDywLmlux8",
"password": "lIuSy7bBA",
"heapster_version": "v1.4.0",
"heapster_metric_resolution": "20s",
"gce_config": {
"zone": "us-east1-b",
"master_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/operations/operation-1520513482047-566e621f93e19-4176219b-118ddc9a",
"minion_instance_group": "https://www.googleapis.com/compute/v1/projects/argon-producer-724/zones/us-east1-b/operations/operation-1520513482872-566e62205d4c0-a588823c-461e2288",
"master_nodes": [
"https://www.googleapis.com/compute/v1/projects/argon-producer-704/zones/us-east1-b/instances/gcetut-master-avkge"
],
"master_name": "gcetut-master-avkge",
"kube_master_count": 1,
"ssh_pub_key": "SSH Public Key",
"kubernetes_version": "1.5.1",
"etcd_discovery_url": "https://discovery.etcd.io/9ef553b5d21b4ec8f7647667918e40dc",
"master_private_ip": "10.144.0.3"
},
"master_public_ip": "104.196.192.25"
From your terminal, run the following command:
curl --insecure https://USERNAME:PASSWORD@MASTER_PUBLIC_IP/api/v1/
You should see a JSON response describing the Kubernetes API.
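If you prefer to script the check, the same URL can be assembled from the three fields in the Kube details JSON. A short Python sketch (the credentials shown are the illustrative values from the details above; the actual HTTP call is left commented out since it requires a live cluster):

```python
import json

# Illustrative Kube details; substitute the values from your own dashboard.
details = json.loads("""
{
  "username": "F0EUvaZDywLmlux8",
  "password": "lIuSy7bBA",
  "master_public_ip": "104.196.192.25"
}
""")

# Equivalent of: curl --insecure https://USERNAME:PASSWORD@MASTER_PUBLIC_IP/api/v1/
url = "https://{0}:{1}@{2}/api/v1/".format(
    details["username"], details["password"], details["master_public_ip"]
)
print(url)

# To actually query the API (self-signed certificate, hence verify=False):
#   import requests
#   resp = requests.get("https://{0}/api/v1/".format(details["master_public_ip"]),
#                       auth=(details["username"], details["password"]),
#                       verify=False)
#   print(resp.json())
```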
Congratulations! You have just launched an autoscaling Kubernetes cluster on GCE!
Teardown
If you want to delete your tutorial Kube from the Supergiant dashboard, simply click on Kubes, select the one you'd like to delete, and then choose Delete from the Actions menu. It’s as simple as that. Supergiant will clean up after itself and remove all the associated GCE services linked to the Kube.
Posted
over 7 years
ago
by
adam
If you’re a developer or admin, you understand how containers can help make your work easier from a technical standpoint. But how can you convince your boss or management that containers are useful from a business standpoint?
It's easy to jump right into the technical benefits of containers, Docker, or orchestration tools like Docker Swarm and Kubernetes. With management, however, you will often notice reluctant eyes glaze over as you excitedly describe how to push a container or explain your favorite CI/CD workflows.
Getting buy-in for containers from higher-ups is important, of course. No matter how great a technology seems to the people who use it in the trenches, taking advantage of it on a large scale usually is not feasible unless you have support from the people making your business decisions.
Having that challenge in mind, here are a few reasons you can give your management for adopting containers:
Standardisation and Hardware Optimisation
Containers help us make the most of the servers we already own, which saves money by reducing the need to purchase new hardware. Whether a container is running in the cloud or on a laptop, Docker makes it easy to standardize our environments from development to production. Packaging your application and all its dependencies into a single container removes the inconsistencies that commonly arise between development and production and allows you to run the same container in any environment.
Containers also let you easily reproduce production or test environments and set up and distribute new ones, resulting in standardized environments that are easy to manage and update.
Containers are open source and Free
The most important management consideration when selecting a new product is whether it will reduce costs and raise profits while staying within budget.
They may ask: how can Docker reduce costs?
Docker enables businesses to optimize infrastructure resources, standardize environments, and reduce time to market. Unlike some older-generation infrastructure technologies, such as VMware, containers are open source, which helps reduce acquisition costs and eliminates the challenge of vendor lock-in. Flexibility is also a huge cost-savings tool: Docker enables users to run their containers wherever they please, whether in the cloud or on-premises.
Management may then ask: how can we save money with flexibility?
No more vendor or Cloud lock-ins. Move your containers to any Docker host on any infrastructure from Amazon, Google, or Azure to the server under your desk.
The learning curve is manageable
It would be dishonest to say there is no learning curve required to migrate to containers. You have to containerize your existing applications, and learn to work within a new type of environment. That takes time. But it’s not impossible. Most people with experience working with Linux and virtual machines will be able to pick up the container paradigm easily enough.
In order to attract top IT talent, we need to be using technology they want to use. For many skilled developers and admins, that technology includes containers.
Containers provide flexible deployment
Containers are an ideal hosting environment for continuous delivery pipelines. That makes them beneficial for organizations that want to push out app changes very quickly—which is important for keeping ahead of the competition.
It is no longer required to run a separate VM per application. Now, Docker empowers you to run as many containers as your infrastructure can handle.
Imagine a Database-as-a-Service offering in which every database is provisioned inside its own VM, say, upwards of 200 VMs running 200 databases. That is not only expensive from both resource and cost perspectives but also extremely inefficient.
If you move to Docker, you can consolidate those 200 databases into 200 containers running on just 20 VMs, reducing costs while making the infrastructure easier to manage.
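The consolidation math above is the kind of figure worth putting in front of management. A back-of-the-envelope sketch in Python; the $0.05/hour VM price is an assumed illustrative figure, not a real quote:

```python
# Assumed hourly VM price, for illustration only.
VM_HOURLY_PRICE = 0.05
HOURS_PER_MONTH = 730

def monthly_vm_cost(vm_count, hourly_price=VM_HOURLY_PRICE):
    """Monthly cost of running `vm_count` VMs around the clock."""
    return vm_count * hourly_price * HOURS_PER_MONTH

before = monthly_vm_cost(200)  # one VM per database
after = monthly_vm_cost(20)    # databases consolidated into containers
savings = 1 - after / before

print(before, after, savings)
```

Under these assumptions the monthly bill drops from $7,300 to $730, a 90% reduction in VM spend, before even counting the reduced management overhead.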
Containers do not require us to use any particular type of programming framework or server. We can write in whichever language we want, and run containers on any version of Linux—and, now, on Windows, too (although Docker support for Windows is still not really production-ready). This flexibility is important for businesses that want to avoid being locked into particular frameworks.
Consistency
Containers help make our testing, staging, and deployment environments identical. That assures stability and minimizes the chance of pushing buggy software into production, which angers customers and damages the organization's reputation.
Docker also integrates easily with all the popular Continuous Integration/Continuous Delivery pipelines, enabling development and testing teams to deliver updates to production much more efficiently. In the event of a problem, we can roll back to the previous version as quickly as it was installed, so even failure situations are easy to recover from.
Easily adapts to production infrastructure
We don’t have to buy new servers or migrate to a new cloud host to use containers. We can set up a Docker environment pretty much anywhere, including on the infrastructure we already own or rent in the cloud.
Docker reduces the time it takes to install an application, scale to meet customer demand, or simply start new containers. Every second saved is money saved, and our time to market is now determined by us, not by our infrastructure.
Give it a Whirl
Supergiant is a data center management system built on Kubernetes. Its purpose is to make managing stateful apps in a containerized environment easier and more performant. We support all uses of Kubernetes, as it is an awesome platform; we simply wanted to make it easier to use by making stateful, distributed apps manageable out of the box. Of course, if you want to reach your Kubernetes cluster within Supergiant, you can. Supergiant gives you tools to manage persistent storage that let you migrate data and change hard drive size, type, etc. on the fly. Supergiant also augments Kubernetes hardware management by squeezing even better utilization out of existing hardware and by autoscaling hardware where and when it is needed.
Get started with Supergiant today and we welcome your feedback:
1. Give it a Try – Run containers on auto scaling, self-healing infrastructure. Get dedicated support and development consulting from CNCF-certified administrators. Our support scales with you from development to production. From SMB to Enterprise, we are with you the whole way.
2. Quickstart Tutorials – Install Supergiant by following these short tutorials.
3. Documentation
4. FAQs - Supergiant
5. Supergiant Community - We want our developer community to be a safe place to share and discuss ideas for everyone. Community interactions are governed by contributor covenant.
Further Reading
Tutorials: How To Install Supergiant
Why Is the Supergiant Packing Algorithm Unique? How Does It Save Me Money?
How to Calculate Kubernetes Cost Savings
Kubernetes Series: Understanding Why Container Architecture is Important to the Future of Your Business
Top Reasons Businesses Should Move to Kubernetes Now
Posted
over 7 years
ago
by
adam
Supergiant will be at KubeCon + CloudNativeCon in Austin Dec 6 - 8 to talk about Kubernetes training and support and to introduce our new open-source Kubernetes as a Service product.
KubeCon + CloudNativeCon gathers all CNCF projects under one roof. Join us, and help us further the advancement of cloud native computing.
Supergiant Cloud is fully-managed, auto-scaling Kubernetes. Use Supergiant's intuitive UI to launch containers and Helm Charts just like apps. Add a custom registry and launch custom projects without fuss.
Supergiant uses a packing algorithm that typically results in 25% lower compute-hour resources. It also enhances predictability, enabling customers with fluctuating resource needs to more fully leverage forward pricing such as AWS Reserved Instances.
The Supergiant team has several CKAs (Certified Kubernetes Administrators) and is among the first selected by the CNCF as official Service Providers for its extensive expertise, Kubernetes support capacity, and contributions to the Kubernetes ecosystem.
Supergiant is available on GitHub. It is free to download and usable under an Apache 2 license. It currently works with Amazon Web Services, DigitalOcean, OpenStack, Google Cloud, and Packet. Supergiant is a member of and contributor to The Linux Foundation and the Cloud Native Computing Foundation.
Posted
over 7 years
ago
by
adam
Supergiant will join other containerization experts at AWS re:Invent 2017 in Las Vegas on November 27 - December 1 to introduce our new open-source Kubernetes as a Service product.
Connect with peers and cloud experts, engage in hands-on labs and bootcamps, and learn how AWS can improve developer productivity, network security, and application performance while keeping infrastructure costs low.
Supergiant Cloud is fully-managed, auto-scaling Kubernetes. Use Supergiant's intuitive UI to launch containers and Helm Charts just like apps. Add a custom registry and launch custom projects without fuss.
Supergiant uses a packing algorithm that typically results in 25% lower compute-hour resources. It also enhances predictability, enabling customers with fluctuating resource needs to more fully leverage forward pricing such as AWS Reserved Instances.
The Supergiant team has three CKAs (Certified Kubernetes Administrators) and is among the first selected by the CNCF as official Service Providers for its extensive expertise, enterprise Kubernetes support capacity, and contributions to the Kubernetes ecosystem.
Supergiant is available on GitHub. It is free to download and usable under an Apache 2 license. It currently works with Amazon Web Services, DigitalOcean, OpenStack, Google Cloud, and Packet. Supergiant is a member of and contributor to The Linux Foundation and the Cloud Native Computing Foundation.