CKAD

Core Concepts – 13%

Configuration – 18%

Multi-Container Pods – 10%

Pod Design – 20%

State Persistence – 8%

Observability – 18%

Services and Networking – 13%

 

By default `kubectl run` creates a deployment:

$ kubectl run nginx --image=nginx                                                  --> deployment

C:\ovi\docker
λ kubectl run nginx --image=nginx --dry-run -o yaml

$ kubectl run nginx --image=nginx --restart=Never                                  --> pod

$ kubectl run busybox --image=busybox --restart=OnFailure                          --> job

$ kubectl run busybox --image=busybox --schedule="* * * * *" --restart=OnFailure   --> cronjob

 

Secrets

You can generate a secret manifest:

$ kubectl create secret generic my-secret --from-literal=ovi=yyy -o yaml --dry-run > ovi-secret.yaml

and now you can create the secret with the below command:

 

$ kubectl create -f ovi-secret.yaml

 

Decode it
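Secret data is stored base64-encoded, not encrypted; a quick local sketch of encoding and decoding the `ovi=yyy` literal from above:

```shell
# Secret values are stored base64-encoded; encode/decode the yyy value locally:
printf 'yyy' | base64        # -> eXl5
printf 'eXl5' | base64 -d    # -> yyy
# Against a live cluster, the stored value could be read with e.g.:
# kubectl get secret my-secret -o jsonpath='{.data.ovi}' | base64 -d
```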

 

*** Jobs and Cronjobs ***
--------------------------

You can use kubectl get to list and check the status of Jobs.

kubectl get jobs

You can use kubectl get to list and check the status of CronJobs.

kubectl get cronjobs

 

liveness pod 

 

apiVersion: v1
kind: Pod
metadata:
  name: ovi-liveness-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello, OVI! && sleep 3600']
    livenessProbe:
      exec:
        command:
        - echo
        - testing
      initialDelaySeconds: 5
      periodSeconds: 5

 

DTR

Start DTR 

sudo docker start $(sudo docker ps -q -a -f name=dtr- )

 

collect the dtr-scanningstore container logs:

 

# docker container ls | grep dtr-scanningstore

Copy the container ID and paste it here:

# docker container logs CONTAINER_ID > scanningstore.txt

 

more dtrcheck.sh

# REPLICA_ID will be the replica ID for the current node.
# This command will start a RethinkDB client attached to the database on the current node.
# ENTRYPOINT ["node" "--no-deprecation" "/root/rethinkdb_eval.js"]
echo run the command below:
echo "r.db('rethinkdb').table('table_status')"
echo ".exit"

VER=v2.2.0

REPLICA_ID=$(sudo docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
sudo docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dtrd.yyy.com/shared/rethinkcli:$VER $REPLICA_ID

# RethinkDB commands
#
#> r.db('dtr2').table('events').indexCreate('type_publishedAt')
#{ created: 1 }
#> r.db('dtr2').table('events').indexCreate('actor_publishedAt')
#{ created: 1 }
#> r.db('dtr2').table('events').indexCreate('type_actor_publishedAt')
#{ created: 1 }
# r.db('rethinkdb').table('table_status')
#r.db('dtr2').table('blob_repository').reconfigure({shards: 1, replicas: 2})

 

kubernetes

λ kubectl cluster-info
Kubernetes master is running at https://ucp.yyy.com:6443
KubeDNS is running at https://ucp.yyy.com:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

C:\ovi\docker

Create a namespace called ‘ovi’

kubectl create namespace ovi

C:\ovi\docker
λ kubectl get namespaces | grep ovi
ovi       Active   117d
ovitest  Active    75d

C:\ovi\docker
λ kubectl create namespace ovi2
namespace “ovi2” created

C:\ovi\docker
λ kubectl delete namespace ovi2
namespace “ovi2” deleted

C:\ovi\docker
λ kubectl get namespaces ovi
NAME   STATUS    AGE
ovi         Active      118d

C:\ovi\docker
λ kubectl run nginx --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine --restart=Never -n ovitest
pod “nginx” created

 

Deploy first app

1.

C:\ovi\Kubernetes_labT
λ kubectl run --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine ovi-nginx -n ovi

2.

C:\ovi\Kubernetes_labT
λ kubectl expose deployment ovi-nginx --type=NodePort --port=80 -n ovi

deployment “ovi-nginx” created

3.

C:\ovi\Kubernetes_labT
λ kubectl describe service ovi-nginx -n ovi
Name: ovi-nginx
Namespace: ovi
Labels: run=ovi-nginx
Annotations: <none>
Selector: run=ovi-nginx
Type: NodePort
IP: 10.96.195.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 33368/TCP
Endpoints: 192.168.158.31:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

4.

C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                                          READY STATUS      RESTARTS   AGE
ovi-nginx-97db6b48b-zt6tk           1/1     Running           0           15m
run-ovinginx-7559884b7b-7nps6 1/1      Running           0            2d

5.

λ kubectl describe pods ovi-nginx-97db6b48b-zt6tk -n ovi

 

6. test from browser

http://nodeIP:33368

 

Other commands

 

C:\ovi\Kubernetes_labT
λ kubectl logs ovi-tomcat8-5cd599dd45-bzgk7 -n ovi

….

06-Feb-2019 20:20:03.407 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
06-Feb-2019 20:20:03.413 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
06-Feb-2019 20:20:03.414 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 618 ms

C:\ovi\docker
λ kubectl get pods --namespace=ovitest
NAME   READY   STATUS   RESTARTS     AGE
nginx      1/1      Running         0              9m

C:\ovi\docker
λ kubectl get pods –all-namespaces | grep ovitest
ovitest nginx 1/1 Running 0 3m

kubectl describe pods nginx --namespace=ovitest

C:\ovi\docker
λ kubectl describe pods/nginx -n ovitest

 

C:\ovi\docker
λ kubectl get pod -o wide --namespace=ovitest
NAME READY    STATUS    RESTARTS   AGE                 IP                NODE
nginx     1/1       Running         0            59m           192.168.80.2    ovi.com

C:\ovi\docker
λ kubectl get namespaces --show-labels

kubectl run nginx --image=nginx --restart=Never -n ovi

λ kubectl get po --all-namespaces -o wide

Check ingress-controller

C:\ovi\docker
λ kubectl get pod -n ingress-nginx
NAME                                                              READY                  STATUS             RESTARTS      AGE
default-http-backend-67f6f4bdc-n6fdt            0/1                 ContainerCreating       0                3d
nginx-ingress-controller-584dc49b67-qjbpw    0/1                           Init:0/1              0                3d

 

$ kubectl get pod -n kube-system

C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                                           READY      STATUS                  RESTARTS      AGE
ovi-nginx-97db6b48b-zt6tk              1/1        Running                       0                1h
ovi-tomcat-56d8469df-rgd62           0/1        ImagePullBackOff       0                 31m
ovi-tomcat8-5cd599dd45-bzgk7      1/1        Running                      0                26m
run-ovinginx-7559884b7b-7nps6      1/1       Running                     0                 2d

 

C:\ovi\Kubernetes_labT
λ kubectl run nginx --image=nginx --restart=Never -n mynamespace --dry-run -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

 

Edit ResourceQuota

C:\ovi\docker
λ kubectl describe quota -n ovi

Name:            ovi-resourcequota
Namespace:       ovi
Resource         Used    Hard
--------         ----    ----
limits.cpu       3500m   8
limits.memory    7Gi     16Gi
pods             7       24
requests.cpu     3100m   8
requests.memory  6720Mi  16Gi

 

C:\ovi\docker
λ kubectl get resourcequota ovi-resourcequota --namespace=ovi --output=yaml

 

C:\ovi\docker
λ kubectl edit resourcequota ovi-resourcequota --namespace=ovi

 

You can run a command inside a container:

kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}

 

C:\ovi\docker
λ kubectl describe nodes 2>&1 | grep -i Disk

 

Reasons K8s Deployments Fail

  1. Wrong Container Image / Invalid Registry Permissions
  2. Application Crashing after Launch
  3. Missing ConfigMap or Secret
  4. Liveness/Readiness Probe Failure
  5. Exceeding CPU/Memory Limits
  6. Resource Quotas
  7. Insufficient Cluster Resources
  8. PersistentVolume Fails to Mount
  9. Validation Errors
  10. Container Image Not Updating

 

View the containers in the pod:

docker ps

Get the process ID for the container:

docker inspect --format '{{ .State.Pid }}' [container_id]

Use nsenter to run a command in the process’s network namespace:

nsenter -t [container_pid] -n ip addr
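The three steps above can be wrapped in a small helper (a sketch; the script name and path are made up, and actually running it against a container needs docker plus root):

```shell
# Hypothetical helper: run a command inside a container's network namespace.
cat > /tmp/netns-exec.sh <<'EOF'
#!/bin/bash
# Usage: netns-exec.sh <container_id> <command...>
pid=$(docker inspect --format '{{ .State.Pid }}' "$1"); shift
exec nsenter -t "$pid" -n "$@"
EOF
chmod +x /tmp/netns-exec.sh
# Example (requires docker and root): /tmp/netns-exec.sh <container_id> ip addr
```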

UCP

Pull/Tag/Push image to your private repository

C:\ovi\docker>docker pull python

C:\ovi\docker>docker tag python:latest dtr.yyy.com/ovi/python

 

C:\ovi\docker>docker login yyy.com
Username : oviovi
Password:
Login Succeeded

C:\ovi\docker>docker image push dtr.yyy.com/ovi/python
The push refers to a repository [dtr.yyy.com/ovi/python]
3523755f4e34: Pushed
9e17bfee4bf6: Pushed

cdcaace38a54: Pushed
6e1b48dc2ccc: Pushed
ff57bdb79ac8: Pushed
6e5e20cbf4a7: Pushed
86985c679800: Pushed
8fad67424c4e: Pushed
latest: digest: sha256:0caa6c6fd0ef24684fc399c2ec84b3279d25fdbfda6c18d1ef9ff94ca
cae6ea9 size: 2007

 

Use $ eval "$(<env.sh)" to point the Docker client at UCP

C:\ovi\docker
λ docker version --format '{{.Server.Version}}'
'ucp/3.0.5'

 

Find a version:

bash-4.2$ sudo docker run jenkins/jenkins:lts-alpine --version
2.164.2

$ sudo systemctl status docker

$ sudo systemctl stop docker

$ sudo systemctl start docker

 

bash-4.2$ rpm -qa | grep docker

docker-ee-17.06.2.ee.16-3.el7.x86_64

$ systemd-cgtop

bash-4.2$ sudo docker ps | grep unhealthy
ec90a15dad65 docker/ucp-swarm:3.0.5 "/bin/swarm manage…" 5 days ago Up 12 hours (unhealthy) 0.0.0.0:2376->2375/tcp ucp-swarm-manager
a89041bae460 docker/ucp-auth-store:3.0.5 "rethinkdb --bind …" 5 days ago Up 12 hours (unhealthy) 0.0.0.0:12383-12384->12383-12384/tcp ucp-auth-store
f92115290567 docker/ucp-etcd:3.0.5 "/bin/entrypoint.s…" 5 days ago Up 12 hours (unhealthy) 2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:1>2379/tcp ucp-kv

$ alias etcdctl="docker exec -it ucp-kv /bin/etcdctl --ca-file /etc/docker/ssl/ca.pem --cert-file /etc/docker/ssl/cert.pem --key-file /etc/docker/ssl/key.pem --endpoints https://localhost:2379"

$ etcdctl ls /docker/nodes/

sudo docker exec -it ucp-kv etcdctl \
  --endpoint https://127.0.0.1:2379 \
  --ca-file /etc/docker/ssl/ca.pem \
  --cert-file /etc/docker/ssl/cert.pem \
  --key-file /etc/docker/ssl/key.pem \
  cluster-health

$ systemd-cgtop

if you want to see the processes within a given cgroup, try this:
$ systemd-cgls /cgroup.name

For example, try this:

$ systemd-cgls /system.slice/NetworkManager.service

$ sudo docker volume ls

$ sudo docker logs --tail=100 ec90a15dad65

Remove multiple containers

$ sudo docker rm -f 81413f9a81d7 a47319cf6975 bf6c729243a9

$ find $(sudo docker volume ls -qf name=ucp.*certs | xargs -n1 sudo docker volume inspect --format {{.Mountpoint}}) -name cert.pem -o -name ca.pem -exec echo {} \; -exec openssl x509 -text -in {} \; | egrep -i "volumes|Not Before|Not After" 2>&1
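The openssl half of that one-liner works on any PEM cert; a minimal local check against a throwaway self-signed cert (file names assumed):

```shell
# Generate a throwaway self-signed cert and print its validity window,
# the same fields the egrep above pulls out.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 2>/dev/null
openssl x509 -noout -dates -in /tmp/demo-cert.pem
```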

 

sudo docker swarm join --token SWMTKN-1-1dqy9xonxatras1cfhdaajygu7kol4vvvvvilnzj7ou4kau7mx8z5ps-5wbz3j3vn2jl4d0ei7trdyr7f 10.zzz.yyy.165:2377

 

docker stack

A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format, similar to a docker-compose.yml file, that defines one or more services.

Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.

Create a stack using the CLI

You can create a stack from a YAML file by executing:

$ docker-cloud stack create -f docker-cloud.yml

Command Description

docker stack deploy    - Deploy a new stack or update an existing stack
docker stack ls        - List stacks
docker stack ps        - List the tasks in the stack
docker stack rm        - Remove one or more stacks
docker stack services  - List the services in the stack

 

Example:

docker stack deploy --compose-file=db.yml db_qa
docker stack deploy --compose-file=core.yml core_qa
docker stack deploy --compose-file=invokers.yml invokers_qa
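For reference, a minimal compose file in the spirit of db.yml (contents assumed, not taken from the original environment) that `docker stack deploy` would accept:

```shell
# Write a hypothetical minimal stack file; deploying it needs a swarm manager.
cat > /tmp/db.yml <<'EOF'
version: "3.3"
services:
  db:
    image: postgres:11-alpine
    deploy:
      replicas: 1
EOF
# docker stack deploy --compose-file=/tmp/db.yml db_qa   # requires: docker swarm init
grep 'replicas' /tmp/db.yml
```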

 

Remove all qa stacks:
docker stack rm db_qa
docker stack rm core_qa
docker stack rm invokers_qa

 

UCP Backup 

Run a backup for the UCP lab, to prepare for tomorrow's upgrade.

 

 

bash-4.2$ sudo docker container run --log-driver none --rm --interactive --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.1.1 backup --id w2in52k65cq4tkql31a0l1dn1 --passphrase "secret2019" > /tmp/backup.tar

INFO[0000] Your engine version 18.09.1-rc1, build 73d4a3c (3.10.0-862.14.4.el7.x86_64) is compatible with UCP 3.1.1 (2f13bcf)

INFO[0000] Temporarily stopping local UCP containers to ensure a consistent backup

INFO[0057] Backing up internal KV store

INFO[0000] Beginning backup

INFO[0005] Backup completed successfully

INFO[0065] Resuming stopped UCP containers

 

bash-4.2$ gpg --decrypt /tmp/backup.tar | tar --list

gpg: AES encrypted session key

gpg: encrypted with 1 passphrase

gpg: NOTE: sender requested “for-your-eyes-only”

./ucp-auth-store.json

./ucp-kube-apiserver.json

./ucp-kube-controller-manager.json

./ucp-kubelet.json

./ucp-kube-proxy.json

./ucp-kube-scheduler.json

./ucp-controller.json

./ucp-swarm-manager.json

./ucp-kv.json

./ucp-proxy.json

./ucp-client-root-ca.json

./ucp-cluster-root-ca.json

./ucp-agent.ub233kxqab8926gj7xcxi9l40.3z3s97b66v3ibma3dydn4g527.json

docker load

bash-4.2$ sudo docker info --format '{{.Swarm.NodeAddr}}'
10.240.140.169

 

sudo docker image save docker/ucp-calico-cni:3.0.5 > ucp-calico-cni:3.0.5.tar

sudo docker image save docker/ucp-calico-node:3.0.5 > ucp-calico-node:3.0.5.tar

sudo docker load < ucp-calico-cni:3.0.5.tar

sudo docker load < ucp-calico-node:3.0.5.tar

sudo docker logs ucp-reconcile -f
sudo docker ps

sudo docker logs --tail=100 a8666fb6edd4

 

Uninstall UCP

docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  uninstall-ucp

restore 

docker container run --rm -i \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  restore [command options] < backup.tar

 

sudo docker ps --format '{{printf "%-25s %-30s %-25s %-25s" .Names .Image .Command .Ports}}'
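The `{{printf …}}` in that Go template is just printf-style column padding (`%-25s` = left-justified, 25 chars wide); the same idea in plain shell:

```shell
# Left-justified, fixed-width columns, like the docker ps --format template:
printf '%-25s %-30s %s\n' NAMES IMAGE PORTS
printf '%-25s %-30s %s\n' ucp-kv docker/ucp-etcd:3.0.5 2379/tcp
```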

 

sudo docker config ls -q |wc -l

 

Generate a support dump for a single server from the CLI

bash-4.2$ sudo docker container run --rm --name ucp -v /var/run/docker.sock:/var/run/docker.sock --log-driver none docker/ucp:3.0.5 support > docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz

 

echo yyyyyzzzcccc | base64 -d

sudo docker stats --all --no-stream

 

A quick one-liner that displays stats for all of your running containers (for older versions):

 

$ sudo docker ps -q | xargs sudo docker stats --no-stream

 

bash-4.2$ sudo docker stats --all --no-stream

----

de980818f40c        ucp-auth-store                                                                                                                161.43%

bash-4.2$ sudo docker stats de980818f40c

 

CONTAINER ID        NAME                      CPU %             MEM USAGE / LIMIT     MEM %               NET I/O                  BLOCK I/O           PIDS

de980818f40c        ucp-auth-store      320.66%             815.7MiB / 251.4GiB       0.32%               7.35GB / 4.14GB     0B / 614MB            133

 

bash-4.2$ sudo docker info | grep -i debug.*server
WARNING: bridge-nf-call-ip6tables is disabled
Debug Mode (server): true

 

bash-4.2$ sudo docker start ucp-reconcile

 

bash-4.2$ sudo docker logs ucp-reconcile

 

bash-4.2$ sudo docker images -f "dangling=true" -q
713a2e66c132
1fdec422b66f
ab6a73ab2085
d6502db93ce0

 

sudo docker rmi $(docker images -f "dangling=true" -q)

bash-4.2$ sudo docker rmi -f 5ec6bed65d0e
Deleted: sha256:5ec6bed65d0eb0666c34742ac04e92cbb671b2d5d0f3a7b9ef801dcd15cd8bf3
Deleted: sha256:51b753ad2fd7c2d3864abc69ad34a9b26eb76441851d626f05e4253f747f0744
Deleted: sha256:3af4af50ff8fdc147a764173adab1acb6acddf9a40c6c9a3a1063fa6717b23de
Deleted: sha256:80a1122d8837842ebacf9501320931323ec6d43dc6cdb9dcc4b7e657095cb1e6
Deleted: sha256:c4ea01c7a1b0d9b726cfe21ba6b0908dbd59a18df58867757af9ad6a540f89ad
Deleted: sha256:afc51cc67edef2a2973ded2a575f042aa6e3bbacfb41fbf780c97e0a3ae738f0
Deleted: sha256:ee73962b2dcd31630be26b67c5d3c5600609e5565321fec6ff39835d463f834d
Deleted: sha256:501b1c234375275a2c57011fc6ac595de0cb20196c93c701c99241e0e9653f99
Deleted: sha256:31c8576bd8863bbd5323c898db826ac953f0ac1899d11e075746c57c9d7e25ad
Deleted: sha256:a6d9c092438f0a4c5e574e38afd8b806be23d06708338784ee0ee47a0eef0729

 

 

Check if debug is Off

 

bash-4.2$ sudo docker info | grep -i debug.*server

Debug Mode (server): false

 

# docker load -i ucp_images_3.0.5.tar.gz

---- Reference ----

calico node/pod unhealthy

https://stackoverflow.com/questions/51648230/docker-swarm-calico-node-pod-is-unhealthy

 

sudo docker ps | grep container_name

sudo docker inspect --format '{{ .State.Pid }}' ecae75f8b0c5

 

ucp monitoring script

 

#!/bin/bash

# Check node cluster for UCP 01 ####

#sudo docker node ls

echo "##################################"
echo "--- ping TEST -----"
echo "##################################"
# Program name: pingall.sh
date
cat /users/ovi/docker/work/list_master.txt | while read output
do
  ping -c 2 "$output" > /dev/null
  if [ $? -eq 0 ]; then
    echo "node $output is up"
  else
    echo "node $output is down"
  fi
done
echo "#######################################"
echo " ----- Check space on nodes ---- "
echo "#######################################"

for i in $(cat ucp01_server.txt); do echo "host -- $i"; ssh -q -t $i df -h /app; echo "--------------------------------------"; done

echo "#######################################"
echo " ----- Check uptime ---- "
echo "#######################################"

for i in $(cat ucp01_server.txt); do echo "host -- $i"; ssh -q -t $i uptime | awk -F, '{ print $1 }'; echo "--------------------------------------"; done
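The `awk -F, '{ print $1 }'` in the uptime check just keeps everything before the first comma; a quick local check with a sample uptime line (output assumed typical):

```shell
# Split on commas and keep the first field: the "up N days" part of uptime.
echo " 20:15:01 up 5 days, 3 users, load average: 0.10, 0.12, 0.09" \
  | awk -F, '{ print $1 }'
# ->  20:15:01 up 5 days
```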

 

push to slack

 

./ovi-slack.sh “ovi3.txt” https://hooks.slack.com/services/ZZZZEJH3B09/BQDTJzzzzzz/9xiCwqRUIHTyyyyyyyyzzzzzzzz

 

#!/bin/bash

cat ovi3.txt | while read LINE; do
  (echo "$LINE" | grep -e "$3") && curl -X POST --silent --data-urlencode \
    "payload={\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}" "$2";
done
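The sed step in the payload line rewrites double quotes to single quotes so the JSON string stays valid; checked locally with a sample line:

```shell
# Build the payload the way the script does; embedded double quotes become
# single quotes so they cannot break the JSON string.
LINE='node "ovi1" is down'
payload="{\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}"
echo "$payload"
# -> {"text": "node 'ovi1' is down"}
```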