The service unavailability error may also appear: Error: Get https://35.197.228.3/api/v1...: dial tcp 35.197.228.3:443: connect: connection refused. This means the connection timeout (6 minutes by default) has been exceeded; in that case you can simply try again.
Now let's move on to the problem of the reliability of a container whose main process we currently start in a command shell. The first thing we will do is separate building the application from launching the container: the entire process of building the service moves into building an image, which can be tested and from which a service container can be created. So let's create an image:
essh@kubernetes-master:~/node-cluster$ cat app/server.js
const http = require('http');
const server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);
});
server.listen(80);
essh@kubernetes-master:~/node-cluster$ cat Dockerfile
FROM node:12
WORKDIR /usr/src/
ADD ./app /usr/src/
RUN npm install
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]
essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .
Sending build context to Docker daemon 257.4MB
Step 1/6 : FROM node:12
 ---> b074182f4154
Step 2/6 : WORKDIR /usr/src/
 ---> Using cache
 ---> 06666b54afba
Step 3/6 : ADD ./app /usr/src/
 ---> Using cache
 ---> 13fa01953b4a
Step 4/6 : RUN npm install
 ---> Using cache
 ---> dd074632659c
Step 5/6 : EXPOSE 3000
 ---> Using cache
 ---> ba3b7745b8e3
Step 6/6 : ENTRYPOINT ["node", "server.js"]
 ---> Using cache
 ---> a957fa7a1efa
Successfully built a957fa7a1efa
Successfully tagged nodejs_cluster:latest
essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
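Since the whole build is now baked into the image, it can be smoke-tested locally before being pushed anywhere. A minimal check might look like this (the host port 8080 and the container name nodejs_test are arbitrary choices for the test, not used elsewhere in this chapter):
# run the freshly built image in the background and map host port 8080 to container port 80
sudo docker run -d --rm --name nodejs_test -p 8080:80 nodejs_cluster
# the application listens on port 80 inside the container
curl http://localhost:8080
sudo docker stop nodejs_test
curl should answer with the Nodejs_cluster greeting and the container id as the hostname.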
Now let's put our image in the GCP registry rather than Docker Hub, because this way we immediately get a private registry that our services automatically have access to:
essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"
essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest
essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker
gcloud credential helpers already registered correctly.
essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest
The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]
194f3d074f36: Pushed
b91e71cc9778: Pushed
640fdb25c9d7: Layer already exists
b0b300677afe: Layer already exists
5667af297e60: Layer already exists
84d0c4b192e8: Layer already exists
a637c551a0da: Layer already exists
2c8d31157b81: Layer already exists
7b76d801397d: Layer already exists
f32868cde90b: Layer already exists
0db06dff9d9a: Layer already exists
latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629
essh@kubernetes-master:~/node-cluster$ gcloud container images list
NAME
gcr.io/node-cluster-243923/nodejs_cluster
Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.
Now we can see it in the GCP admin panel: Container Registry -> Images. Let's replace the image in our container definition with our own image. For production, the launched image should be pinned to a version, so that PODs are not silently updated when the system re-creates them, for example when a POD is moved from one node to another while its machine is taken down for maintenance. For development it is better to use the latest tag, which updates the service when the image is updated. When you update the service, you need to recreate it, that is, delete and recreate it, since otherwise Terraform will simply update the parameters rather than recreate the container with the new image. Also, if we update the image and mark the service as modified with the command ./terraform taint ${NAME_SERVICE}, our service will simply be updated, which can be seen with the command ./terraform plan. Therefore, for now, to update you need to use the commands ./terraform destroy -target=${NAME_SERVICE} and ./terraform apply, and the names of the services can be found in ./terraform state list:
essh@kubernetes-master:~/node-cluster$ ./terraform state list
data.google_client_config.default
module.kubernetes.google_container_cluster.node-ks
module.kubernetes.google_container_node_pool.node-ks-pool
module.Nginx.kubernetes_deployment.nodejs
module.Nginx.kubernetes_service.nodejs
essh@kubernetes-master:~/node-cluster$ ./terraform destroy -target=module.Nginx.kubernetes_deployment.nodejs
essh@kubernetes-master:~/node-cluster$ ./terraform apply
Now let's replace the container definition in our Terraform code:
container {
image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
name = "node-js"
}
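To make the production/development distinction described above explicit, the image tag can be passed into the module instead of being hard-coded. A minimal sketch, assuming a hypothetical image_version variable that is not part of the configuration used in this chapter:
variable "image_version" {
  # "latest" for development, a pinned tag such as "v0.0.1" for production
  default = "latest"
}

container {
  image = "gcr.io/node-cluster-243923/nodejs_cluster:${var.image_version}"
  name  = "node-js"
}
With a pinned tag, a POD re-created on another node cannot silently pull a newer image; with latest, re-creating the service is enough to pick up the most recent build.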
Let's check the result of balancing for different nodes (no line break at the end of the output):
essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80
Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$
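The same check can be repeated in a single line instead of calling curl by hand (a small sketch; the IP address is the one assigned to the service above, and echo just adds the missing line break):
for i in $(seq 1 10); do curl -s http://35.246.85.138:80; echo; done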
We will now automate the process of creating images. For this we will use the Google Cloud Build service (free for 5 users and up to 50 GB of traffic) to build a new image when a new version (tag) is created in a Cloud Source Repositories repository (free on the Google Cloud Platform Free Tier). Google Cloud Platform -> Menu -> Tools -> Cloud Build -> Triggers -> Enable Cloud Build API -> Get Started -> Create a repository, which will then be available under Google Cloud Platform -> Menu -> Tools -> Source Code Repositories (Cloud Source Repositories):
essh@kubernetes-master:~/node-cluster$ cd app/
essh@kubernetes-master:~/node-cluster/app$ ls
server.js
essh@kubernetes-master:~/node-cluster/app$ mv ./server.js ../
essh@kubernetes-master:~/node-cluster/app$ gcloud source repos clone nodejs --project=node-cluster-243923
Cloning into '/home/essh/node-cluster/app/nodejs'...
warning: You appear to have cloned an empty repository.
Project [node-cluster-243923] repository [nodejs] was cloned to [/home/essh/node-cluster/app/nodejs].
essh@kubernetes-master:~/node-cluster/app$ ls -a
. .. nodejs
essh@kubernetes-master:~/node-cluster/app$ ls nodejs/
essh@kubernetes-master:~/node-cluster/app$ ls -a nodejs/
. .. .git
essh@kubernetes-master:~/node-cluster/app$ cd nodejs/
essh@kubernetes-master:~/node-cluster/app/nodejs$ mv ../../server.js .
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'test server'
[master (root-commit) 46dd957] test server
1 file changed, 7 insertions(+)
create mode 100644 server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push -u origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 408 bytes | 408.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
Now it's time to set up image building when a new version of the product is created: go to GCP -> Cloud Build -> Triggers -> Create trigger -> Google Cloud source code repository -> nodejs. Choose the tag trigger type so that the image is not built on ordinary commits. I will change the image name from gcr.io/node-cluster-243923/nodejs:$SHORT_SHA to gcr.io/node-cluster-243923/nodejs:$TAG_NAME, so that the image is tagged with the release version, and set the timeout to 60 seconds. Now I'll commit and add a tag:
essh@kubernetes-master:~/node-cluster/app/nodejs$ cp ../../Dockerfile .
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add Dockerfile
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'add Dockerfile'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git remote -v
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (fetch)
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (push)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 380 bytes | 380.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
46dd957..b86c01d master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.1 -m 'test to run'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.1
Counting objects: 1, done.
Writing objects: 100% (1/1), 161 bytes | 161.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.1 -> v0.0.1
Now, if we press the start trigger button, we will see the image in the Container Registry with our tag:
essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud container images list
NAME
gcr.io/node-cluster-243923/nodejs
gcr.io/node-cluster-243923/nodejs_cluster
Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.
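As an aside, Cloud Build can also be invoked directly, without a trigger, to build the Dockerfile in the current directory and push the result; a sketch, where the tag manual-test is an arbitrary example not used elsewhere in this chapter:
# upload the current directory to Cloud Build, build it and push the image to the registry
gcloud builds submit --tag gcr.io/node-cluster-243923/nodejs:manual-test .
This is convenient for checking the build itself before wiring it to tags.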
Now if we just add the changes and the tag, the image will be created automatically:
essh@kubernetes-master:~/node-cluster/app/nodejs$ sed -i 's/HOSTNAME}/HOSTNAME}\n/' server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'fix'
[master 230d67e] fix
1 file changed, 2 insertions(+), 1 deletion(-)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 304 bytes | 304.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
b86c01d..230d67e master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.2 -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.2
Counting objects: 1, done.
Writing objects: 100% (1/1), 158 bytes | 158.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.2 -> v0.0.2
essh@kubernetes-master:~/node-cluster/app/nodejs$ sleep 60
essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud builds list
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
2b024d7e-87a9-4d2a-980b-4e7c108c5fad 2019-06-22T17:13:14+00:00 28S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.2 SUCCESS
6b4ae6ff-2f4a-481b-9f4e-219fafb5d572 2019-06-22T16:57:11+00:00 29S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS
e50df082-31a4-463b-abb2-d0f72fbf62cb 2019-06-22T16:56:48+00:00 29S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a latest -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin latest
Counting objects: 1, done.
Writing objects: 100% (1/1), 156 bytes | 156.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] latest -> latest
essh@kubernetes-master:~/node-cluster/app/nodejs$ cd ../..
Creating multiple environments with Terraform clusters
When trying to create several clusters from the same configuration, we will run into duplicate identifiers, which must be unique, so we isolate the clusters from each other by placing them in different projects. To create a project manually, go to GCP -> Products -> IAM and administration -> Resource management, create a NodeJS-prod project, switch to it and wait for it to become active. Let's look at the state of the current project:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = file("./kubernetes_key.json")
project = "node-cluster-243923"
region = "europe-west2"
}
module "kubernetes" {
source = "./Kubernetes"
}
data "google_client_config" "default" {}
module "Nginx" {
source = "./nodejs"
image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
endpoint = module.kubernetes.endpoint
access_token = data.google_client_config.default.access_token
cluster_ca_certificate = module.kubernetes.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster$ gcloud config list project
[core]
project = node-cluster-243923
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-243923
Updated property [core/project].
essh@kubernetes-master:~/node-cluster$ gcloud compute instances list
NAME ZONE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-default-pool-2e5073d4-csmg europe-north1-a 10.166.0.2 35.228.96.97 RUNNING
gke-node-ks-node-ks-pool-ccbaf5c6-4xgc europe-north1-a 10.166.15.233 35.228.82.222 RUNNING
gke-node-ks-default-pool-72a6d4a3-ldzg europe-north1-b 10.166.15.231 35.228.143.7 RUNNING
gke-node-ks-node-ks-pool-9ee6a401-ngfn europe-north1-b 10.166.15.234 35.228.129.224 RUNNING
gke-node-ks-default-pool-d370036c-kbg6 europe-north1-c 10.166.15.232 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-d7b09e63-q8r2 europe-north1-c 10.166.15.235 35.228.85.157 RUNNING
Let's switch gcloud to the new project and look at it while it is still empty:
essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-prod-244519
Updated property [core/project].
essh@kubernetes-master:~/node-cluster$ gcloud config list project
[core]
project = node-cluster-prod-244519
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster$ gcloud compute instances list
Listed 0 items.
Last time, for node-cluster-243923, we created a service account on whose behalf the cluster was created. To work with several Terraform accounts, we will create a service account for the new project through IAM and Administration -> Service Accounts. We will need two separate folders in order to run Terraform separately for connections that use different authorization keys: if we put both providers with different keys into one configuration, the connection for the first project succeeds, but when Terraform proceeds to create a cluster for the second project it is rejected, because the key of the first project is not valid for the second. There is another possibility: activate the account as an organization account (this requires a website and an e-mail address, which Google verifies); then projects can be created from code without using the admin panel. First the dev environment:
essh@kubernetes-master:~/node-cluster$ ./terraform destroy
essh@kubernetes-master:~/node-cluster$ mkdir dev
essh@kubernetes-master:~/node-cluster$ cd dev/
essh@kubernetes-master:~/node-cluster/dev$ gcloud config set project node-cluster-243923
Updated property [core/project].
essh@kubernetes-master:~/node-cluster/dev$ gcloud config list project
[core]
project = node-cluster-243923
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster/dev$ cp ../kubernetes_key.json ../main.tf .
essh@kubernetes-master:~/node-cluster/dev$ cat main.tf
provider "google" {
alias = "dev"
credentials = file("./kubernetes_key.json")
project = "node-cluster-243923"
region = "europe-west2"
}
module "kubernetes_dev" {
source = "../Kubernetes"
node_pull = false
providers = {
google = google.dev
}
}
data "google_client_config" "default" {}
module "Nginx" {
source = "../nodejs"
providers = {
google = google.dev
}
image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
endpoint = module.kubernetes_dev.endpoint
access_token = data.google_client_config.default.access_token
cluster_ca_certificate = module.kubernetes_dev.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster/dev$ ../terraform init
essh@kubernetes-master:~/node-cluster/dev$ ../terraform apply
essh@kubernetes-master:~/node-cluster/dev$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-default-pool-71afadb8-4t39 europe-north1-a n1-standard-1 10.166.0.60 35.228.96.97 RUNNING
gke-node-ks-node-ks-pool-134dada1-3cdf europe-north1-a n1-standard-1 10.166.0.61 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-134dada1-c476 europe-north1-a n1-standard-1 10.166.15.194 35.228.82.222 RUNNING
essh@kubernetes-master:~/node-cluster/dev$ gcloud container clusters get-credentials node-ks
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/dev$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-29dzx 1/1 Running 0 2m57s 10.12.3.2 gke-node-ks-node-ks-pool-134dada1-c476 <none>
terraform-nodejs-6fd8498cb5-jcbj6 0/1 Pending 0 2m58s <none> <none> <none>
terraform-nodejs-6fd8498cb5-lvfjf 1/1 Running 0 2m58s 10.12.1.3 gke-node-ks-node-ks-pool-134dada1-3cdf <none>
As you can see, the PODs were distributed across the pool of nodes and did not land on the pool with the Kubernetes system PODs, since there was not enough free capacity there. It is important to note that the number of nodes in the pool was increased automatically, and only the specified limit prevented a third node from being created in the pool. If we set remove_default_node_pool to true, the Kubernetes system PODs and our PODs end up in the same pool. By resource requests, Kubernetes takes up a little more than one core and our POD takes half a core, so the remaining PODs were not created, but we saved on resources (a sketch of the relevant settings follows the output below):
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-node-ks-pool-495b75fa-08q2 europe-north1-a n1-standard-1 10.166.0.57 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-495b75fa-wsf5 europe-north1-a n1-standard-1 10.166.0.59 35.228.96.97 RUNNING
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters get-credentials node-ks
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/Kubernetes$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-97svs 1/1 Running 0 14m 10.12.2.2 gke-node-ks-node-ks-pool-495b75fa-wsf5 <none>
terraform-nodejs-6fd8498cb5-d9zkr 0/1 Pending 0 14m <none> <none> <none>
terraform-nodejs-6fd8498cb5-phk8x 0/1 Pending 0 14m <none> <none> <none>
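The scheduling behaviour described above is driven by the node pool settings of the cluster and the resource requests of the deployment. A sketch of the relevant fragments (the values are illustrative and the exact block syntax depends on the provider versions in use):
resource "google_container_cluster" "node-ks" {
  # ...
  # drop the separate default pool so that system PODs and our PODs share one pool
  remove_default_node_pool = true
}

# inside the container block of the kubernetes_deployment resource:
resources {
  requests {
    cpu    = "500m"   # half a core per POD, as discussed above
    memory = "128Mi"
  }
}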
After creating the service account, add its key and check it:
essh@kubernetes-master:~/node-cluster/dev$ gcloud auth login
essh@kubernetes-master:~/node-cluster/dev$ gcloud projects create node-cluster-prod3
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/node-cluster-prod3].
Waiting for [operations/cp.7153345484959140898] to finish... done.
https://medium.com/@pnatraj/how-to-run-gcloud-command-line-using-a-service-account-f39043d515b9
essh@kubernetes-master:~/node-cluster$ gcloud auth application-default login
essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-prod-244519-6fd863dd4d38.json ./kubernetes_prod.json
essh@kubernetes-master:~/node-cluster$ echo "kubernetes_prod.json" >> .gitignore
essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts list
NAME EMAIL DISABLED
Compute Engine default service account [email protected] False
terraform-prod [email protected] False
essh@kubernetes-master:~/node-cluster$ gcloud projects list | grep node-cluster
node-cluster-243923 node-cluster 26345118671
node-cluster-prod-244519 node-cluster-prod 1008874319751
Let's create a prod environment:
essh@kubernetes-master:~/node-cluster$ mkdir prod
essh@kubernetes-master:~/node-cluster$ cd prod/
essh@kubernetes-master:~/node-cluster/prod$ cp ../main.tf ../kubernetes_prod_key.json .
essh@kubernetes-master:~/node-cluster/prod$ gcloud config set project node-cluster-prod-244519
Updated property [core/project].
essh@kubernetes-master:~/node-cluster/prod$ gcloud config list project
[core]
project = node-cluster-prod-244519
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster/prod$ cat main.tf
provider "google" {
alias = "prod"
credentials = file("./kubernetes_prod_key.json")
project = "node-cluster-prod-244519"
region = "us-west2"
}
module "kubernetes_prod" {
source = "../Kubernetes"
providers = {
google = google.prod
}
}
data "google_client_config" "default" {}
module "Nginx" {
source = "../nodejs"
providers = {
google = google.prod
}
image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
endpoint = module.kubernetes_prod.endpoint
access_token = data.google_client_config.default.access_token
cluster_ca_certificate = module.kubernetes_prod.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster/prod$ ../terraform init
essh@kubernetes-master:~/node-cluster/prod$ ../terraform apply
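One possible way to keep dev/main.tf and prod/main.tf from drifting apart later is to move the few values that differ into variables and keep the rest of the configuration identical; a sketch with hypothetical variable names that are not used in this book:
variable "project" {
  default = "node-cluster-prod-244519"
}

variable "region" {
  default = "us-west2"
}

variable "credentials_file" {
  default = "./kubernetes_prod_key.json"
}

provider "google" {
  alias       = "prod"
  credentials = file(var.credentials_file)
  project     = var.project
  region      = var.region
}
Each environment then only overrides the variable defaults, for example through a terraform.tfvars file in its own folder.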