GCP – Networking

VPC Network Peering 

Cloud VPC Network Peering lets you privately connect two VPC networks, which can reduce latency and egress cost and improve security, since traffic stays on Google's network instead of traversing the public internet.
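Peering must be configured from both sides before traffic flows; a minimal sketch (project and network names are placeholders):

```shell
# Run in project-a; a mirror-image command must be run in project-b.
gcloud compute networks peerings create peer-a-to-b \
    --network=vpc-a \
    --peer-project=project-b \
    --peer-network=vpc-b
```

The peering becomes ACTIVE only once both networks have created their side.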

Shared VPC 

Shared VPC lets you share subnets from a host project with other (service) projects. You can then create resources (like VM instances) on those subnets.
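A rough sketch of the setup (host-project and service-project are placeholder IDs):

```shell
# Designate the host project that owns the shared network
gcloud compute shared-vpc enable host-project

# Attach a service project so it can use the host's subnets
gcloud compute shared-vpc associated-projects add service-project \
    --host-project=host-project
```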

Packet Mirroring

Packet Mirroring clones the traffic of specified instances in your VPC network and forwards it to a collector for examination, supporting advanced security analysis and application performance monitoring.
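A sketch of creating a mirroring policy; the names are placeholders, and the collector must be an internal load balancer forwarding rule:

```shell
# Mirror traffic from subnet-a and deliver it to the collector ILB
gcloud compute packet-mirrorings create ovi-mirror \
    --region=us-central1 \
    --network=default \
    --collector-ilb=collector-fr \
    --mirrored-subnets=subnet-a
```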

GKE – ingress

Ingress Controller

1.

ovidiu@cloudshell:~(dev-1)$ kubectl create deployment nginx --image=nginx --replicas=2

ovidiu@cloudshell:~(dev-1)$ kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           73m

2.

ovidiu@cloudshell:~(dev-1)$ kubectl expose deployment nginx --port=80

 

ovidiu@cloudshell:~(dev-1)$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.3.240.1    <none>        443/TCP        76m
nginx        NodePort    10.3.248.50   <none>        80:30495/TCP   75m

3.

ovidiu@cloudshell:~(dev-1)$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ovi-gke-ing created

ovidiu@cloudshell:~/exam/ex16 (asixdev-175816)$ more ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ovi-gke-ing
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
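On clusters where networking.k8s.io/v1beta1 has been removed (Kubernetes 1.22+), the same Ingress would be written against the GA API; a sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ovi-gke-ing
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix          # required in v1
        backend:
          service:                # serviceName/servicePort become a nested object
            name: nginx
            port:
              number: 80
```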

 

ovidiu@cloudshell:~(dev-1)$ kubectl get ing
NAME          HOSTS   ADDRESS          PORTS   AGE
ovi-gke-ing   *       34.120.206.125   80      22m

GKE

Container Analysis – 

Container Analysis provides vulnerability scanning and metadata storage for container images in Container Registry. Google Container Registry provides secure, private Docker image storage on Google Cloud Platform.

gcloud commands 

Build using Dockerfile
  1. Get your Cloud project ID by running the following command:

gcloud config get-value project

ovidiu@cloudshell:~$ gcloud config set project ovidev-yy58yy
Updated property [core/project].

ovidiu@cloudshell:~ (ovidev-yy58yy)$ gcloud config get-value project
Your active configuration is: [cloudshell-yy279]
ovidev-yy58yy

2. Run the following command from the directory containing Dockerfile, where project-id is your Cloud project ID:

ovidiu@cloudshell:~/quickstart-docker (ovidev-yy58yy)$ gcloud builds submit --tag gcr.io/ovidev-yy5816/ovi-image

You’ve just built a Docker image named ovi-image using a Dockerfile and pushed the image to Container Registry.

To disable gcloud command-line tool metrics collection, run the following command in your Cloud Shell session:

gcloud config set disable_usage_reporting true

  • Readiness Probe
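A readiness probe tells the kubelet when a container is ready to receive traffic; a pod is removed from Service endpoints until the probe succeeds. A minimal sketch using the nginx image from above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ready
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:                 # probe succeeds when GET / returns 2xx/3xx
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first probe
      periodSeconds: 10        # probe interval
```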

Container Analysis

GKE Cluster IP Allocation

Use “kubectl explain <resource>” for a detailed description of that resource (e.g. kubectl explain pods).
**** Create GKE CLUSTER ****

ovidiu@cloudshell:~$ gcloud projects list

ovidiu@cloudshell:~$ gcloud config set project dev-1

commands used to create a two-node cluster for studying
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

The default Kubernetes version is available using the following command

ovidiu@cloudshell:~ (dev-1)$ gcloud container get-server-config

gcloud container clusters create ovi-cluster --cluster-version=1.17.13-gke.2600 --image-type=ubuntu --num-nodes=2
or 

gcloud container clusters create ${CLUSTER_NAME} --preemptible --zone ${INSTANCE_ZONE} --scopes cloud-platform --num-nodes 3

ovidiu@cloudshell:~(dev-1)$ gcloud container clusters create ovi-cluster --preemptible --cluster-version=1.16.13-gke.401 --image-type=ubuntu --num-nodes=1

ovidiu@cloudshell:~(dev-1758yy)$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
gke-ovi-cluster-default-pool-095e58a2-5vvx   Ready    <none>   75s   v1.16.13-gke.401
gke-ovi-cluster-default-pool-095e58a2-t2f9   Ready    <none>   75s   v1.16.13-gke.401

If you don't specify a cluster version, the default version is used.

ovidiu@cloudshell:~(dev-1)$ gcloud container clusters create ovi2-cluster --preemptible --image-type=ubuntu --num-nodes=2
ovidiu@cloudshell:~ (dev-1)$ kubectl cluster-info
ovidiu@cloudshell:(dev-1)$ kubectl config current-context
gke_dev-1_us-central1-a_ovi2-cluster

ovidiu@cloudshell:(dev-1)$ kubectl config get-contexts

ovidiu@cloudshell:(dev-1)$ kubectl config use-context gke_asixdev-175816_us-central1-a_ovi-cluster

ovidiu@cloudshell:~(dev-1)$ kubectl config view
gcloud container clusters list

ovidiu@cloudshell:~ (dev-17)$ gcloud container clusters list
NAME          LOCATION       MASTER_VERSION   MASTER_IP       MACHINE_TYPE   NODE_VERSION     NUM_NODES    STATUS
ovi2-cluster  us-central1-a  1.16.13-gke.401  104.154.207.56  n1-standard-1  1.16.13-gke.401    2          RUNNING
kubectl config get-contexts # display list of contexts 
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
#deployment
ovidiu@cloudshell:~ (dev-1758yy)$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

ovidiu@cloudshell:~ (dev-1758yy)$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-xtz6m   1/1     Running   0          13s

kubectl create deployment redis-deploy --image redis --namespace=ovi-ns --dry-run=client -o yaml > ovi_deploy.yaml

ovidiu@cloudshell:~ (dev-153320)$ k set image deploy/nginxd nginx=nginx:1.16
deployment.apps/nginxd image updated

ovidiu@cloudshell:~/ (dev-153320)$ k describe deploy nginxd | grep -i image
Image: nginx:1.16

ovidiu@cloudshell:~/ (dev-153320)$ kubectl rollout undo deployment/nginxd
deployment.apps/nginxd rolled back

ovidiu@cloudshell:~/ (dev-153320)$ k describe deploy nginxd | grep -i image
Image: nginx


#pod 

ovidiu@cloudshell:~ (dev-1)$ kubectl run nginx --image=nginx --restart=Never
pod/nginx created
ovidiu@cloudshell:~(dev-1)$ kubectl run nginx --image=nginx --restart=Never --port=80 --expose
service/nginx created
pod/nginx created

ovidiu@cloudshell:(dev-1)$ kubectl get svc nginx
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.3.250.183   <none>        80/TCP    3m12s

ovidiu@cloudshell:~(dev-1)$ kubectl get ep
NAME         ENDPOINTS         AGE
kubernetes   34.123.5.10:443   52m
nginx                          3m21s

get logs 

ovidiu@cloudshell:~(dev-1)$ kubectl logs nginx

If the pod crashed and restarted, get the logs of the previous instance:

ovidiu@cloudshell:~(dev-1)$ kubectl logs nginx -p


#job
-----
ovidiu@cloudshell:~ (dev-1758yy)$ kubectl create job nginx --image=nginx  #job
job.batch/nginx created

#cron job
---------- 

ovidiu@cloudshell:~ (dev-1758yy)$ kubectl create cronjob nginx --image=nginx --schedule="* * * * *"
cronjob.batch/nginx created

#config map 
------------
ovidiu@cloudshell:~ (dev-1758yy)$ kubectl create configmap app-config --from-literal=keyovi=valueovi
configmap/app-config created
ovidiu@cloudshell:~ (dev-1)$ vi config.txt
ovidiu@cloudshell:~ (dev-1)$ more config.txt
ovi3=test3
ovi4=test4
ovidiu@cloudshell:~ (dev-1)$ kubectl create cm ovicm --from-file=config.txt
configmap/ovicm created

ovidiu@cloudshell:~(dev-175816)$ kubectl describe cm ovicm
Name:         ovicm
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
config.txt:
----
ovi3=test3
ovi4=test4

Events:  <none>
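One common way to consume a ConfigMap like ovicm is to mount it as a volume, so each key (here, config.txt) becomes a file; a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/config   # config.txt appears at /etc/config/config.txt
  volumes:
  - name: config
    configMap:
      name: ovicm
```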


#create a namespace
-------------------


ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl create namespace ovi
namespace/ovi created

ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl create deployment nginx --image=nginx -n ovi
deployment.apps/nginx created
ovidiu@cloudshell:~/exam (dev-175816)$ kubectl get pods -n ovi
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-ffhfg   1/1     Running   0          69s
Create a service 

ovidiu@cloudshell:~/exam (dev-1758yy)$ vi nginx.yaml

ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl create -f nginx.yaml
service/nginx created
ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.3.240.1     <none>        443/TCP        36m
nginx        LoadBalancer   10.3.242.209   <pending>     80:30625/TCP   16s


ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.3.240.1     <none>          443/TCP        37m
nginx        LoadBalancer   10.3.242.209   35.202.30.161   80:30625/TCP   81s

#deployment 


ovidiu@cloudshell:~/exam (dev-175816)$ vi nginx-deployment.yaml

ovidiu@cloudshell:~/exam (dev-175816)$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-574b87c764-8sjr2   1/1     Running   0          12s
nginx-deployment-574b87c764-rl2v4   1/1     Running   0          12s
nginx-deployment-574b87c764-s67f5   1/1     Running   0          12s


ovidiu@cloudshell:~/exam (dev-1)$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           116s

ovidiu@cloudshell:~/exam (dev-1)$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-574b87c764   3         3         3       10m
ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
nginx-deployment-574b87c764-8sjr2   1/1     Running   0          11m   app=nginx,pod-template-hash=574b87c764
nginx-deployment-574b87c764-rl2v4   1/1     Running   0          11m   app=nginx,pod-template-hash=574b87c764
nginx-deployment-574b87c764-s67f5   1/1     Running   0          11m   app=nginx,pod-template-hash=574b87c764

ovidiu@cloudshell:~/exam (dev-1)$ kubectl describe deployments

from cli 

ovidiu@cloudshell:~ (dev-1)$ kubectl create deploy ovid --image=nginx --replicas=3 --port=80 -n ovi

ovidiu@cloudshell:~ (dev-1)$ kubectl expose deploy ovid --port=6363 --target-port=80 -n ovi

#cronjob 

ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get jobs --watch
NAME               COMPLETIONS   DURATION   AGE
hello-1603371240   0/1           2m57s      2m57s
hello-1603371300   0/1           117s       117s
hello-1603371360   0/1           57s        57s
hello-1603371420   0/1                      0s
hello-1603371420   0/1           0s         0s

ovidiu@cloudshell:~ (dev-1758yy)$ kubectl get cronjob hello
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     1        60s             82s

ovidiu@cloudshell:~ (dev-1758yy)$ kubectl describe cronjob hello


# replica set

ovidiu@cloudshell:~(dev-1)$ kubectl get rs
NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       5h11m

ovidiu@cloudshell:~/(dev-1)$ kubectl autoscale rs frontend --max=10 --min=3 --cpu-percent=50
horizontalpodautoscaler.autoscaling/frontend autoscaled

#pv and pvc 

ovidiu@cloudshell:~/ (dev-1758yy)$ vi pv-3.yaml 

ovidiu@cloudshell:~/exam (dev-175816)$ kubectl apply -f pv-3.yaml
persistentvolume/task-pv-volume created

ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   50Mi       RWO            Retain           Available           manual                  6s

ovidiu@cloudshell:~/exam (dev-1758yy)$ vi pvc.yaml
ovidiu@cloudshell:~/exam (dev-1758yy)$ kubectl apply -f pvc.yaml
persistentvolumeclaim/task-pv-claim created

ovidiu@cloudshell:~/exam (dev-175816)$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
task-pv-volume   50Mi       RWO            Retain           Bound    ovi/task-pv-claim   manual                  3m36s

ovidiu@cloudshell:~/exam (dev-175816)$ kubectl get pvc
NAME           STATUS        VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim  Bound   task-pv-volume          50Mi      RWO            manual       24s
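The contents of pv-3.yaml and pvc.yaml are not shown above; manifests matching the kubectl get output would look roughly like this (the hostPath backing store is an assumption, the name, capacity, access mode, and storage class are taken from the output):

```yaml
# pv-3.yaml (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 50Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data            # assumed backing store
---
# pvc.yaml (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: ovi               # the CLAIM column shows ovi/task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
```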


# node affinity 

# labels

kubectl get node node01 --show-labels

kubectl label node node01 color=blue
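With the color=blue label applied, a pod can be steered onto that node via nodeAffinity; a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement at scheduling time
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue
  containers:
  - name: nginx
    image: nginx
```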

No need to keep the cluster around when not studying, so:

gcloud container clusters delete ovi-cluster

 

ovidiu@cloudshell:~/exam (dev-1758yy)$ gcloud container clusters delete ovi-cluster
The following clusters will be deleted.
- [ovi-cluster] in [us-central1-a]

Do you want to continue (Y/n)? y

 

kubernetes Network policies

A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.

NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to the selected pods.
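A sketch that allows only pods labeled role=frontend to reach the nginx pods on port 80 (the label names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: nginx               # the policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend       # only these pods may connect
    ports:
    - protocol: TCP
      port: 80
```

Note that GKE only enforces NetworkPolicy when network policy enforcement is enabled on the cluster.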

 

cAdvisor

cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers.

istio

Istio is a configurable, open source service-mesh layer that connects, monitors, and secures the containers in a Kubernetes cluster.

Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no code changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality, which includes:

  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
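The sidecar proxies mentioned above are typically injected automatically once a namespace is labeled for injection; a sketch:

```shell
# Label the namespace; new pods created in it get an Envoy sidecar
kubectl label namespace default istio-injection=enabled

# Restart workloads so existing pods are recreated with the sidecar
kubectl rollout restart deployment nginx
```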

Istio is designed for extensibility and meets diverse deployment needs by intercepting and configuring all traffic between services in the mesh.

 Istio layers on top of Kubernetes, adding containers that are essentially invisible to the programmer and administrator. Called “sidecar” containers, these act as a “person in the middle,” directing traffic and monitoring the interactions between components. The two work in combination in three ways: configuration, monitoring, and management.

Istio features:

  • Traffic management
  • Security
  • Telemetry
  • Visualization

References:

https://istio.io/latest/docs/concepts/what-is-istio/

https://www.ibm.com/cloud/learn/istio