
}

}

network_interface {

network = "default"

access_config {}

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create

Terraform will perform the following actions:

# google_compute_instance.cluster will be created

+ resource "google_compute_instance" "cluster" {

+ can_ip_forward = false

+ cpu_platform = (known after apply)

+ deletion_protection = false

+ guest_accelerator = (known after apply)

+ id = (known after apply)

+ instance_id = (known after apply)

+ label_fingerprint = (known after apply)

+ machine_type = "f1-micro"

+ metadata_fingerprint = (known after apply)

+ name = "cluster"

+ project = (known after apply)

+ self_link = (known after apply)

+ tags_fingerprint = (known after apply)

+ zone = "europe-north1-a"

+ boot_disk {

+ auto_delete = true

+ device_name = (known after apply)

+ disk_encryption_key_sha256 = (known after apply)

+ source = (known after apply)

+ initialize_params {

+ image = "debian-cloud/debian-9"

+ size = (known after apply)

+ type = (known after apply)

}

}

+ network_interface {

+ address = (known after apply)

+ name = (known after apply)

+ network = "default"

+ network_ip = (known after apply)

+ subnetwork = (known after apply)

+ subnetwork_project = (known after apply)

+ access_config {

+ assigned_nat_ip = (known after apply)

+ nat_ip = (known after apply)

+ network_tier = (known after apply)

}

}

+ scheduling {

+ automatic_restart = (known after apply)

+ on_host_maintenance = (known after apply)

+ preemptible = (known after apply)

+ node_affinities {

+ key = (known after apply)

+ operator = (known after apply)

+ values = (known after apply)

}

}

}

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

google_compute_instance.cluster: Creating …

google_compute_instance.cluster: Still creating … [10s elapsed]

google_compute_instance.cluster: Creation complete after 11s [id=cluster]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Add a public static IP address and SSH key to the node:

essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster

Generating public/private rsa key pair.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in node-cluster.

Your public key has been saved in node-cluster.pub.

The key fingerprint is:

SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master

The key's randomart image is:

+---[RSA 2048]----+

| .o. +. |

| o. o. =. |

| * + o. =. |

| = *. … ... + o |

| B +. … S * |

| = + oo X +. |

| o. =. + = + |

| . = .... … |

| ..E. |

+----[SHA256]-----+

essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub

node-cluster.pub

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_compute_address" "static-ip-address" {

name = "static-ip-address"

}

resource "google_compute_instance" "cluster" {

name = "cluster"

zone = "europe-north1-a"

machine_type = "f1-micro"

boot_disk {

initialize_params {

image = "debian-cloud / debian-9"

}

}

metadata = {

ssh-keys = "essh:${file("./node-cluster.pub")}"

}

network_interface {

network = "default"

access_config {

nat_ip = "${google_compute_address.static-ip-address.address}"

}

}

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Let's check the SSH connection to the server:

essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222

The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.

ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.

Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64

The programs included with the Debian GNU/Linux system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent

permitted by applicable law.

essh@cluster:~$ ls

essh@cluster:~$ exit

logout

Connection to 35.228.82.222 closed.

Install packages:

essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash

essh@kubernetes-master:~/node-cluster$ exec -l $SHELL

essh@kubernetes-master:~/node-cluster$ gcloud init

Let's choose a project:

You are logged in as: [[email protected]].

Pick cloud project to use:

[1] agile-aleph-203917

[2] node-cluster-243923

[3] essch

[4] Create a new project

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 4, or a value present in the list: 2

Your current project has been set to: [node-cluster-243923].

Let's choose a zone:

[50] europe-north1-a

Did not print [12] options.

Too many options [62]. Enter "list" at prompt to print choices fully.

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 62, or a value present in the list: 50

essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"

essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID

node-cluster-243923

essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json

essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01

sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01

// https://docs.docker.com/machine/drivers/gce/

// https://github.com/docker/machine/issues/4722

essh@kubernetes-master:~/node-cluster$ gcloud config list

[compute]

region = europe-north1

zone = europe-north1-a

[core]

account = [email protected]

disable_usage_reporting = False

project = node-cluster-243923

Your active configuration is: [default]

Let's add copying the file and executing the script:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_compute_address" "static-ip-address" {

name = "static-ip-address"

}

resource "google_compute_instance" "cluster" {

name = "cluster"

zone = "europe-north1-a"

machine_type = "f1-micro"

boot_disk {

initialize_params {

image = "debian-cloud / debian-9"

}

}

metadata = {

ssh-keys = "essh:${file("./node-cluster.pub")}"

}

network_interface {

network = "default"

access_config {

nat_ip = "${google_compute_address.static-ip-address.address}"

}

}

}

resource "null_resource" "cluster" {

triggers = {

cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"

}

connection {

host = "${google_compute_address.static-ip-address.address}"

type = "ssh"

user = "essh"

timeout = "2m"

private_key = "${file("~/node-cluster/node-cluster")}"

# agent = "false"

}

provisioner "file" {

source = "client.js"

destination = "~ / client.js"

}

provisioner "remote-exec" {

inline = [

"cd ~ && echo 1> test.txt"

]

}

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

google_compute_address.static-ip-address: Creating …

google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Creating …

google_compute_instance.cluster: Still creating … [10s elapsed]

google_compute_instance.cluster: Creation complete after 12s [id=cluster]

null_resource.cluster: Creating …

null_resource.cluster: Provisioning with 'file' …

null_resource.cluster: Provisioning with 'remote-exec' …

null_resource.cluster (remote-exec): Connecting to remote host via SSH …

null_resource.cluster (remote-exec): Host: 35.228.82.222

null_resource.cluster (remote-exec): User: essh

null_resource.cluster (remote-exec): Password: false

null_resource.cluster (remote-exec): Private key: true

null_resource.cluster (remote-exec): Certificate: false

null_resource.cluster (remote-exec): SSH Agent: false

null_resource.cluster (remote-exec): Checking Host Key: false

null_resource.cluster (remote-exec): Connected!

null_resource.cluster: Creation complete after 7s [id=816586071607403364]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

esschtolts@cluster:~$ ls /home/essh/

client.js test.txt

[sudo] password for essh:

google_compute_address.static-ip-address: Refreshing state … [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Refreshing state … [id=cluster]

null_resource.cluster: Refreshing state … [id=816586071607403364]

Enter a value: yes

null_resource.cluster: Destroying … [id=816586071607403364]

null_resource.cluster: Destruction complete after 0s

google_compute_instance.cluster: Destroying … [id=cluster]

google_compute_instance.cluster: Still destroying … [id=cluster, 10s elapsed]

google_compute_instance.cluster: Still destroying … [id=cluster, 20s elapsed]

google_compute_instance.cluster: Destruction complete after 27s

google_compute_address.static-ip-address: Destroying … [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_address.static-ip-address: Destruction complete after 8s

To deploy the entire project, you can add it to a repository and then deliver it to the virtual machine by copying the installation script onto that machine and launching it:
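A minimal sketch of such a step, reusing the connection settings of the null_resource shown earlier; the repository URL and the install.sh script are hypothetical placeholders:

resource "null_resource" "deploy" {

triggers = {

cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"

}

connection {

host = "${google_compute_address.static-ip-address.address}"

type = "ssh"

user = "essh"

private_key = "${file("~/node-cluster/node-cluster")}"

}

provisioner "remote-exec" {

inline = [

# the repository URL and install.sh below are hypothetical placeholders

"git clone https://github.com/user/project.git ~/project",

"bash ~/project/install.sh"

]

}

}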

Moving on to Kubernetes

In the minimal version, creating a cluster of three nodes looks like this:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

initial_node_count = 3

}

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply

The cluster was created in 2:15. After I added two more zones, europe-north1-b and europe-north1-c, to europe-north1-a and set the number of instances created per zone to one, the cluster was created in 3:13, because for higher availability the nodes are created in different data centers: europe-north1-a, europe-north1-b and europe-north1-c:

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}

Now let's split our cluster into two parts: the control part with Kubernetes itself and the pool for our PODs. Both will be distributed across three data centers. The pool for our PODs can autoscale under load up to two nodes per zone (from three to six nodes in total):

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}

resource "google_container_node_pool" "node-ks-pool" {

name = "node-ks-pool"

cluster = "${google_container_cluster.node-ks.name}"

location = "europe-north1-a"

node_count = "1"

node_config {

machine_type = "n1-standard-1"

}

autoscaling {

min_node_count = 1

max_node_count = 2

}

}

Let's see what happened and look for the IP address of the cluster entry point:

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters list

NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

node-ks europe-north1-a 1.12.8-gke.6 35.228.20.35 n1-standard-1 1.12.8-gke.6 6 RECONCILING

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters describe node-ks | grep '^endpoint'

endpoint: 35.228.20.35

essh@kubernetes-master:~/node-cluster/Kubernetes$ ping 35.228.20.35 -c 2

PING 35.228.20.35 (35.228.20.35) 56(84) bytes of data.

64 bytes from 35.228.20.35: icmp_seq=1 ttl=59 time=8.33 ms

64 bytes from 35.228.20.35: icmp_seq=2 ttl=59 time=7.09 ms

--- 35.228.20.35 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 7.094/7.714/8.334/0.620 ms

By adding variables, which I put into a separate file just for clarity, we can parameterize our config for different uses, for example, to create test and production clusters. A variable is referenced as var.name_value and can be interpolated into strings much like in JS: ${var.name_value}; path.root is available in the same way. A usage sketch follows the variable declarations below.

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat variables.tf

variable "region" {

default = "europe-north1"

}

variable "project_name" {

type = string

default = ""

}

variable "gce_key" {

default = "./kubernetes_key.json"

}

variable "node_count_zone" {

default = 1

}
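For illustration only, inside the module these variables might be referenced roughly like this (a sketch based on the provider and cluster resources shown above; appending the zone suffixes -a, -b, -c to var.region is an assumption):

provider "google" {

credentials = "${file(var.gce_key)}"

project = var.project_name

region = var.region

}

resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "${var.region}-a"

node_locations = ["${var.region}-b", "${var.region}-c"]

initial_node_count = var.node_count_zone

}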

They can be passed through the -var switch, for example: sudo ./terraform apply -var="project_name=node-cluster-243923".

essh@kubernetes-master:~/node-cluster/Kubernetes$ cp ../kubernetes_key.json .

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply -var="project_name=node-cluster-243923"

Our project in the folder is not only a project, but also a module ready to use:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cd ..

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Or upload to the public repository:

essh@kubernetes-master:~/node-cluster/Kubernetes$ git init

Initialized empty Git repository in /home/essh/node-cluster/Kubernetes/.git/

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate.backup" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo ".terraform/" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ rm -f kubernetes_key.json

essh@kubernetes-master:~/node-cluster/Kubernetes$ git remote add origin https://github.com/ESSch/terraform-google-kubernetes.git

essh@kubernetes-master:~/node-cluster/Kubernetes$ git add .

essh@kubernetes-master:~/node-cluster/Kubernetes$ git commit -m 'create a k8s Terraform module'

[master (root-commit) 4f73c64] create a Kubernetes Terraform module

3 files changed, 48 insertions(+)

create mode 100644 .gitignore

create mode 100644 main.tf

create mode 100644 variables.tf

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push -u origin master

essh@kubernetes-master:~/node-cluster/Kubernetes$ git tag -a v0.0.2 -m 'publish'

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push origin v0.0.2

After publishing it in the module registry https://registry.terraform.io/ (once it meets the requirements, such as having a description), we can use it:

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "kubernetes" {

# source = "./Kubernetes"

source = "ESSch / kubernetes / google"

version = "0.0.2"

project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

On the next creation of the cluster I got the error ZONE_RESOURCE_POOL_EXHAUSTED: "does not have enough resources available to fulfill the request. Try a different zone, or try again later", indicating that the required servers are not available in this region. For me this is not a problem, and I do not need to edit the module's code, because I parameterized the module with the region: if I simply pass region = "europe-west2" to the module as a parameter, then after the initialization command ./terraform init and the apply command ./terraform apply, Terraform will move my cluster to the specified region. Let's improve our module a little by moving the provider from the Kubernetes child module to the main module (the main script is also a module). Having moved it to the main module, we will be able to use one more module; otherwise the provider in one module would conflict with the provider in another. Only providers are inherited transparently from the main module into its child modules. The rest of the data has to be passed from child to parent through output variables, and from parent to child by parameterizing the child, but that comes later, when we create another module. Moving the provider to the parent module will also be useful for the next module we create, since it will create Kubernetes entities that do not depend on the provider; this way we untie the Google provider from our module, and it can be used with other providers that support Kubernetes. Now we don't need to pass the project name in a variable: it is set in the provider. For convenience of development, I will use a local connection of the module for now. I created a folder and file for a new module:

essh@kubernetes-master:~/node-cluster$ ls nodejs/

main.tf

essh@kubernetes-master:~/node-cluster$ cat main.tf

// module "kubernetes" {

// source = "ESSch/kubernetes/google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}

provider "google" {

credentials = "${file("./kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-west2"

}

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}

module "nodejs" {

source = "./nodejs"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Now let's transfer data from the Kubernetes infrastructure module to the application module:

essh@kubernetes-master:~/node-cluster$ cat Kubernetes/outputs.tf

output "endpoint" {

value = google_container_cluster.node-ks.endpoint

sensitive = true

}

output "name" {

value = google_container_cluster.node-ks.name

sensitive = true

}

output "cluster_ca_certificate" {

value = base64decode(google_container_cluster.node-ks.master_auth.0.cluster_ca_certificate)

}

essh@kubernetes-master:~/node-cluster$ cat main.tf

// module "kubernetes" {

// source = "ESSch/kubernetes/google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}

provider "google" {

credentials = file("./kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}

module "nodejs" {

source = "./nodejs"

endpoint = module.Kubernetes.endpoint

cluster_ca_certificate = module.Kubernetes.cluster_ca_certificate

}

essh@kubernetes-master:~/node-cluster$ cat nodejs/variable.tf

variable "endpoint" {}

variable "cluster_ca_certificate" {}

To check the balancing of traffic across all nodes, we will start NGINX, replacing its standard page with the hostname. We will do the replacement with a simple command and then start the server. To see how the server is started, look at its invocation in the Dockerfile: CMD ["nginx", "-g", "daemon off;"], which is equivalent to running nginx -g 'daemon off;' at the command line. Note that the Dockerfile does not launch the server through BASH but starts it directly: if it went through a shell, the shell would survive a crash of the server process and the container would neither fail nor be re-created. But for our experiments BASH is fine:

essh@kubernetes-master:~/node-cluster$ sudo docker run -it nginx:1.17.0 which nginx

/usr/sbin/nginx

sudo docker run -it --rm -p 8333:80 nginx:1.17.0 /bin/bash -c "echo \$HOSTNAME > /usr/share/nginx/html/index2.html && /usr/sbin/nginx -g 'daemon off;'"

Now let's create our PODs in triplicate with NGINX, which Kubernetes will try to distribute to different servers by default. Let's also add the service as a balancer:

essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf

terraform {

required_version = "> = 0.12.0"

}

data "google_client_config" "default" {}

provider "kubernetes" {

host = var.endpoint

token = data.google_client_config.default.access_token

cluster_ca_certificate = var.cluster_ca_certificate

load_config_file = false

}

essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf

resource "kubernetes_deployment" "nodejs" {

metadata {

name = "terraform-nodejs"

labels = {

app = "NodeJS"

}

}

spec {

replicas = 3

selector {

match_labels = {

app = "NodeJS"

}

}

template {

metadata {

labels = {

app = "NodeJS"

}

}

spec {

container {

image = "Nginx: 1.17.0"

name = "node-js"

command = ["/ bin / bash"]

args = ["-c", "echo $ HOSTNAME> /usr/share/Nginx/html/index.html && / usr / sbin / Nginx -g 'daemon off;'"]

}

}

}

}

}

resource "kubernetes_service" "nodejs" {

metadata {

name = "terraform-nodejs"

}

spec {

selector = {

app = kubernetes_deployment.nodejs.metadata.0.labels.app

}

port {

port = 80

target_port = var.target_port

}

type = "LoadBalancer"

}

Let's check the result using kubectl; for this we transfer the credentials from gcloud to kubectl.

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

essh@kubernetes-master:~/node-cluster$ gcloud container clusters get-credentials node-ks --region=europe-west2-a

Fetching cluster endpoint and auth data.

kubeconfig entry generated for node-ks.

essh@kubernetes-master:~/node-cluster$ kubectl get deployments -o wide

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR

terraform-nodejs 3 3 3 3 25m node-js nginx:1.17.0 app=NodeJS

essh@kubernetes-master:~/node-cluster$ kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

terraform-nodejs-6bd565dc6c-8768b 1/1 Running 0 4m45s 10.4.3.15 gke-node-ks-node-ks-pool-07115c5b-bw15 <none>

terraform-nodejs-6bd565dc6c-hr5vg 1/1 Running 0 4m42s 10.4.5.13 gke-node-ks-node-ks-pool-27e2e52c-9q5b <none>

terraform-nodejs-6bd565dc6c-mm7lh 1/1 Running 0 4m43s 10.4.2.6 gke-node-ks-default-pool-2dc50760-757p <none>

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker ps | grep node-js_terraform

152e3c0ed940 719cd2e3ed04

"/bin/bash -c 'ech…" 8 minutes ago Up 8 minutes

k8s_node-js_terraform-nodejs-6bd565dc6c-8768b_default_7a87ae4a-9379-11e9-a78e-42010a9a0114_0

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker exec -it 152e3c0ed940 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-8768b

esschtolts@gke-node-ks-node-ks-pool-27e2e52c-9q5b ~ $ docker exec -it c282135be446 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-default-pool-2dc50760-757p ~ $ docker exec -it 8d1cf9ef44e6 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.2.6

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.5.13

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.3.15

terraform-nodejs-6bd565dc6c-8768b

The balancer distributes load between the PODs that match its selector: the labels in the POD metadata must match the Selector specified in the spec section of the balancer description. All nodes are connected to one common network, so you can connect to any node (I did this over SSH from the GCP web interface, in the Compute Engine section with the virtual machines). You can address either the IP address of a container or of its node host, or the terraform-nodejs service itself by running curl terraform-nodejs:80 from inside a container; that hostname is created by the internal DNS from the service name (a small example follows the output below). The external IP address EXTERNAL-IP can be viewed both with kubectl on the service and in the web interface: GCP -> Kubernetes Engine -> Services:

essh@kubernetes-master:~/node-cluster$ kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 6h58m

terraform-nodejs LoadBalancer 10.7.246.234 35.197.220.103 80:32085/TCP 5m27s

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-8768b

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-8768b

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-hr5vg

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-8768b

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh
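To illustrate the in-cluster DNS name mentioned above, you can also start a throwaway POD and query the service by name; the pod name dns-test and the busybox image are arbitrary choices here:

essh@kubernetes-master:~/node-cluster$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- wget -qO- http://terraform-nodejs:80

This prints the hostname of whichever POD the balancer picks, just like the curl calls above.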

Now let's move on to implementing the NodeJS server:

essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm node:12 which node

/usr/local/bin/node

sudo docker run -it --rm -p 8222:80 node:12 /bin/bash -c 'cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git &&

/usr/local/bin/node /usr/src/nodejs-hello-world/index.js'

firefox http://localhost:8222

Let's replace the block in our container with:

container {

image = "node: 12"

name = "node-js"

command = ["/ bin / bash"]

args = [

"-c",

"cd / usr / src / && git clone https://github.com/fhinkel/nodejs-hello-world.git && / usr / local / bin / node /usr/src/nodejs-hello-world/index.js "

]

}

If you comment out a Kubernetes module while its resources remain in the Terraform state, you still have to remove the leftovers from the state:

essh@kubernetes-master:~/node-cluster$ ./terraform apply

Error: Provider configuration not present

essh@kubernetes-master:~/node-cluster$ ./terraform state list

data.google_client_config.default

module.Kubernetes.google_container_cluster.node-ks

module.Kubernetes.google_container_node_pool.node-ks-pool

module.nodejs.kubernetes_deployment.nodejs

module.nodejs.kubernetes_service.nodejs

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs

Removed module.nodejs.kubernetes_deployment.nodejs

Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs

Removed module.nodejs.kubernetes_service.nodejs

Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform apply

module.Kubernetes.google_container_cluster.node-ks: Refreshing state … [id=node-ks]

module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state … [id=europe-west2-a/node-ks/node-ks-pool]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Terraform Cluster Reliability and Automation

For a general overview of automation, see https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0; here we will dwell on the details. Now, if we run ./terraform destroy and then try to recreate the entire infrastructure from scratch, we will get errors. They occur because the order in which services are created is not specified, and Terraform by default sends requests to the API in 10 parallel threads (this can be changed with the -parallelism switch during apply or destroy). As a result, Terraform tries to create Kubernetes services (Deployment and Service) on node pools that do not yet exist, and the same happens when it creates a Service that has to proxy a Deployment that has not yet been created. By telling Terraform to query the API in a single thread, ./terraform apply -parallelism=1, we reduce possible provider-side limits on the frequency of API calls, but we still do not solve the problem of the missing creation order. We will not comment out the dependent blocks and gradually uncomment them while re-running ./terraform apply, nor will we build the system piece by piece by specifying individual blocks with ./terraform apply -target=module.nodejs.kubernetes_deployment.nodejs. Instead, we will express the dependencies in the code itself through the initialization of variables: the first is already defined as the external variable var.endpoint, and the second we will create locally:

locals {

app = kubernetes_deployment.nodejs.metadata.0.labels.app

}

Now we can add the dependencies to the code: depends_on = [var.endpoint] and depends_on = [kubernetes_deployment.nodejs].
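As a sketch only (keeping the resource names used above and filling in the NGINX port 80 for the target port), the service can be tied to the deployment like this:

resource "kubernetes_service" "nodejs" {

# explicit ordering: create the service only after the deployment exists

depends_on = [kubernetes_deployment.nodejs]

metadata {

name = "terraform-nodejs"

}

spec {

selector = {

# the local defined above already references the deployment's labels

app = local.app

}

port {

port = 80

target_port = 80

}

type = "LoadBalancer"

}

}

With this in place Terraform creates the deployment before the service instead of trying to create both in parallel.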

