tensorflow

bash-4.2$ sudo docker pull tensorflow/tensorflow:nightly-py3-jupyter
nightly-py3-jupyter: Pulling from tensorflow/tensorflow
7413c47ba209: Pull complete
0fe7e7cbb2e8: Pull complete
1d425c982345: Pull complete
344da5c95cec: Pull complete
f0f373a09782: Extracting [=========================> ] 88.57MB/171.8MB
d7522288648d: Download complete
5e538e77e6eb: Download complete
841db8c4b878: Downloading [=======> ] 43.67MB/289.2MB
a5d3dc5c0fca: Download complete
93080a73b48a: Download complete
424b59aaa775: Download complete
d0353440d65a: Download complete
6144de382f6d: Download complete
729600b8b14c: Download complete
261ae3aa721e: Download complete
500c259546bd: Download complete
f0da77afb904: Download complete
29f0add3b34f: Download complete
7fcf09da4efd: Download complete
e18ee217efa4: Download complete
d2e0a117e7ee: Download complete

….

bash-4.2$ sudo docker pull tensorflow/tensorflow:nightly-py3-jupyter
nightly-py3-jupyter: Pulling from tensorflow/tensorflow
7413c47ba209: Pull complete
0fe7e7cbb2e8: Pull complete
1d425c982345: Pull complete
344da5c95cec: Pull complete
f0f373a09782: Pull complete
d7522288648d: Pull complete
5e538e77e6eb: Pull complete
841db8c4b878: Pull complete
a5d3dc5c0fca: Pull complete
93080a73b48a: Pull complete
424b59aaa775: Pull complete
d0353440d65a: Pull complete
6144de382f6d: Pull complete
729600b8b14c: Pull complete
261ae3aa721e: Pull complete
500c259546bd: Pull complete
f0da77afb904: Pull complete
29f0add3b34f: Pull complete
7fcf09da4efd: Pull complete
e18ee217efa4: Pull complete
d2e0a117e7ee: Pull complete
Digest: sha256:151cb2361ba2dd49803430cc22f46f659ad39bc5dc84ae36170212a9500e6780
Status: Downloaded newer image for tensorflow/tensorflow:nightly-py3-jupyter
docker.io/tensorflow/tensorflow:nightly-py3-jupyter

bash-4.2$ sudo docker tag tensorflow/tensorflow:nightly-py3-jupyter ovi.com/tensorflow:nightly-py3-jupyter

bash-4.2$ sudo docker push ovi.com/tensorflow:nightly-py3-jupyter

The push refers to repository [ovi.com/tensorflow]
fafed7e084c2: Pushed
871d4a411a5f: Pushed
7cfe37b7c673: Pushed
af6c44609326: Pushed
82f6a46e7b5c: Pushed
a23b6f909550: Pushed
6da19faece2a: Pushed
42f607d58fee: Pushed
d0e89b0b2f9b: Pushed
8e74e0b521d3: Pushed
d3a211c15a8c: Pushed
ec00cb7f1f0c: Pushed
d5dfb4826643: Pushed
0b6a0645d4c9: Pushed
1bc4bc65b435: Pushed
7fc7e05f494e: Pushed
03f407d8a81a: Pushed
b079b3fa8d1b: Pushed
a31dbd3063d7: Pushed
c56e09e1bd18: Pushed
543791078bdb: Pushed
nightly-py3-jupyter: digest: sha256:151cb2361ba2dd49803430cc22f46f659ad39bc5dc84ae36170212a9500e6780 size: 4706

C:\ovi\docker
λ docker run -it --rm ovi.com/tensorflow:nightly-py3-jupyter bash

[TensorFlow ASCII-art banner]

WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.

To avoid this, run the container by specifying your user's userid:

$ docker run -u $(id -u):$(id -g) args…

root@448e4b7c17ec:/tf# ls -l
total 4
drwxrwxrwx 1 root root 4096 Aug 6 14:39 tensorflow-tutorials
root@448e4b7c17ec:/tf#

root@448e4b7c17ec:/tf/tensorflow-tutorials# ls -l
total 64
-rw-rw-r-- 1 root root    69 Aug 6 12:35 README.md
-rw-r--r-- 1 root root 31080 Aug 6 14:39 basic_classification.ipynb
-rw-r--r-- 1 root root 26683 Aug 6 14:39 basic_text_classification.ipynb
root@448e4b7c17ec:/tf/tensorflow-tutorials# more README.md
Want more tutorials like these?

Check out tensorflow.org/tutorials!

root@bc1c62c1274b:/tf# python --version
Python 3.6.8

bash-4.2$ sudo docker pull tensorflow/tensorflow:nightly
nightly: Pulling from tensorflow/tensorflow
7413c47ba209: Already exists
0fe7e7cbb2e8: Already exists
1d425c982345: Already exists
344da5c95cec: Already exists
e6ea787a31d6: Pull complete
f1e1aa574d76: Pull complete
c21ee1afa4b3: Pull complete
36072ce77e68: Pull complete
d1b7231452fa: Pull complete
dd94042e9d39: Pull complete
Digest: sha256:4be7452f34ac3ac0006d4ddce8b7684b8d2149c8b2531ab10d04b4d43d533e32
Status: Downloaded newer image for tensorflow/tensorflow:nightly

docker.io/tensorflow/tensorflow:nightly
bash-4.2$ sudo docker tag tensorflow/tensorflow:nightly ovi.com/tensorflow:nightly

bash-4.2$ sudo docker push ovi.com/tensorflow:nightly
The push refers to repository [ovi.com/tensorflow]
57d2de5cd2f5: Pushed
c2d60a8e5af8: Pushed
5a988b036c43: Pushing [===> ] 43.38MB/687.2MB
e754593b8b9a: Pushed
3555e11170cf: Pushed
a7e38894aa41: Pushing [===> ] 26.8MB/366.3MB
b079b3fa8d1b: Layer already exists
a31dbd3063d7: Layer already exists
c56e09e1bd18: Layer already exists
543791078bdb: Layer already exists

CKAD

Core Concepts – 13 %

Configuration – 18 %

Multi-Container Pods – 10 %

Pod Design – 20 %

State Persistence – 8 %

Observability – 18 %

Services and Networking – 13 %

 

By default, kubectl run creates a Deployment:

$ kubectl run nginx --image=nginx                                              ---> deployment

C:\ovi\docker
λ kubectl run nginx --image=nginx --dry-run -o yaml

$ kubectl run nginx --image=nginx --restart=Never                              ---> pod

$ kubectl run busybox --image=busybox --restart=OnFailure                      ---> job

$ kubectl run busybox --image=busybox --schedule="* * * * *" --restart=OnFailure   ---> cronjob

 

Secrets

You can generate secrets:

$ kubectl create secret generic my-secret --from-literal=ovi=yyy -o yaml --dry-run > ovi-secret.yaml

and now you can create the secret with the command below:

$ kubectl create -f ovi-secret.yaml

 

Decode it
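Secret values are base64-encoded, not encrypted. A quick sketch of decoding the ovi=yyy literal from the example above (plain coreutils, no cluster needed):

```shell
# Encode a value the way Kubernetes stores it in a Secret
encoded=$(printf 'yyy' | base64)
echo "$encoded"    # eXl5

# Decode a value copied out of 'kubectl get secret my-secret -o yaml'
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # yyy
```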

 

*** Jobs and Cronjobs ***
————————–

You can use kubectl get to list and check the status of Jobs.

kubectl get jobs

You can use kubectl get to list and check the status of CronJobs.

kubectl get cronjobs
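For reference, the imperative busybox cron example earlier maps to a manifest roughly like this (a sketch: the name busybox-cron and the date command are made up, and batch/v1beta1 matches the cluster versions in these notes; newer clusters use batch/v1):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: busybox-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: busybox
            image: busybox
            command: ['sh', '-c', 'date']
          restartPolicy: OnFailure
```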

 

liveness pod 

 

apiVersion: v1
kind: Pod
metadata:
  name: ovi-liveness-pod
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', "echo Hello, OVI! && sleep 3600"]
    livenessProbe:
      exec:
        command:
        - echo
        - testing
      initialDelaySeconds: 5
      periodSeconds: 5

 

DTR

Start DTR 

sudo docker start $(sudo docker ps -q -a -f name=dtr- )

 

Collect the dtr-scanningstore container logs:

 

# docker container ls | grep dtr-scanningstore

Copy the container ID and paste it here:

# docker container logs CONTAINER_ID > scanningstore.txt

 

more dtrcheck.sh

# REPLICA_ID will be the replica ID for the current node.
# This command will start a RethinkDB client attached to the database on the current node.
# ENTRYPOINT ["node" "--no-deprecation" "/root/rethinkdb_eval.js"]
echo run the command below:
echo "r.db('rethinkdb').table('table_status')"
echo ".exit"

VER=v2.2.0

REPLICA_ID=$(sudo docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
sudo docker run -it --rm --net dtr-ol -v dtr-ca-$REPLICA_ID:/ca dtrd.yyy.com/shared/rethinkcli:$VER $REPLICA_ID

# rethinkdb Commands
#
#> r.db('dtr2').table('events').indexCreate('type_publishedAt')
#{ created: 1 }
#> r.db('dtr2').table('events').indexCreate('actor_publishedAt')
#{ created: 1 }
#> r.db('dtr2').table('events').indexCreate('type_actor_publishedAt')
#{ created: 1 }
# r.db('rethinkdb').table('table_status')
#r.db('dtr2').table('blob_repository').reconfigure({shards: 1, replicas: 2})
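The REPLICA_ID line above just takes the third dash-separated field of the dtr-rethinkdb-&lt;id&gt; container name; a sketch with a made-up replica ID:

```shell
# dtr-rethinkdb-<replica-id> -> field 3 when split on '-'
name='dtr-rethinkdb-f93a1b2c4d5e'
replica_id=$(echo "$name" | cut -d- -f3)
echo "$replica_id"    # f93a1b2c4d5e
```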

 

kubernetes

λ kubectl cluster-info
Kubernetes master is running at https://ucp.yyy.com:6443
KubeDNS is running at https://ucp.yyy.com:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

C:\ovi\docker

Create a namespace called 'ovi'

kubectl create namespace ovi

C:\ovi\docker
λ kubectl get namespaces | grep ovi
ovi       Active   117d
ovitest  Active    75d

C:\ovi\docker
λ kubectl create namespace ovi2
namespace "ovi2" created

C:\ovi\docker
λ kubectl delete namespace ovi2
namespace "ovi2" deleted

C:\ovi\docker
λ kubectl get namespaces ovi
NAME   STATUS    AGE
ovi         Active      118d

C:\ovi\docker
λ kubectl run nginx --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine --restart=Never -n ovitest
pod "nginx" created

 

Deploy first app

1.

C:\ovi\Kubernetes_labT
λ kubectl run --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine ovi-nginx -n ovi

2.

C:\ovi\Kubernetes_labT
λ kubectl expose deployment ovi-nginx --type=NodePort --port=80 -n ovi

deployment "ovi-nginx" created

3.

C:\ovi\Kubernetes_labT
λ kubectl describe service ovi-nginx -n ovi
Name: ovi-nginx
Namespace: ovi
Labels: run=ovi-nginx
Annotations: <none>
Selector: run=ovi-nginx
Type: NodePort
IP: 10.96.195.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 33368/TCP
Endpoints: 192.168.158.31:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

4.

C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                            READY   STATUS    RESTARTS   AGE
ovi-nginx-97db6b48b-zt6tk       1/1     Running   0          15m
run-ovinginx-7559884b7b-7nps6   1/1     Running   0          2d

5.

λ kubectl describe pods ovi-nginx-97db6b48b-zt6tk -n ovi

 

6. Test from the browser:

http://nodeIP:33368

 

Other commands

 

C:\ovi\Kubernetes_labT
λ kubectl logs ovi-tomcat8-5cd599dd45-bzgk7 -n ovi

….

06-Feb-2019 20:20:03.407 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
06-Feb-2019 20:20:03.413 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
06-Feb-2019 20:20:03.414 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 618 ms

C:\ovi\docker
λ kubectl get pods --namespace=ovitest
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          9m

C:\ovi\docker
λ kubectl get pods --all-namespaces | grep ovitest
ovitest nginx 1/1 Running 0 3m

kubectl describe pods nginx --namespace=ovitest

C:\ovi\docker
λ kubectl describe pods/nginx -n ovitest

C:\ovi\docker
λ kubectl get pod -o wide --namespace=ovitest
NAME    READY   STATUS    RESTARTS   AGE   IP             NODE
nginx   1/1     Running   0          59m   192.168.80.2   ovi.com

C:\ovi\docker
λ kubectl get namespaces --show-labels

kubectl run nginx --image=nginx --restart=Never -n ovi

λ kubectl get po --all-namespaces -o wide

Check ingress-controller

C:\ovi\docker
λ kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
default-http-backend-67f6f4bdc-n6fdt        0/1     ContainerCreating   0          3d
nginx-ingress-controller-584dc49b67-qjbpw   0/1     Init:0/1            0          3d

 

$ kubectl get pod -n kube-system

C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                            READY   STATUS             RESTARTS   AGE
ovi-nginx-97db6b48b-zt6tk       1/1     Running            0          1h
ovi-tomcat-56d8469df-rgd62      0/1     ImagePullBackOff   0          31m
ovi-tomcat8-5cd599dd45-bzgk7    1/1     Running            0          26m
run-ovinginx-7559884b7b-7nps6   1/1     Running            0          2d

 

C:\ovi\Kubernetes_labT
λ kubectl run nginx --image=nginx --restart=Never -n mynamespace --dry-run -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

 

Edit ResourceQuota

C:\ovi\docker
λ kubectl describe quota -n ovi

Name: ovi-resourcequota
Namespace: ovi
Resource          Used     Hard
--------          ----     ----
limits.cpu        3500m    8
limits.memory     7Gi      16Gi
pods              7        24
requests.cpu      3100m    8
requests.memory   6720Mi   16Gi

 

C:\ovi\docker
λ kubectl get resourcequota ovi-resourcequota --namespace=ovi --output=yaml

 

C:\ovi\docker
λ kubectl edit resourcequota ovi-resourcequota --namespace=ovi

 

You can run a command in a container:

kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} … ${ARGN}

 

C:\ovi\docker
λ kubectl describe nodes 2>&1 | grep -i Disk

 

Reasons Kubernetes Deployments Fail

  1.  Wrong Container Image / Invalid Registry Permissions
  2. Application Crashing after Launch
  3. Missing ConfigMap or Secret
  4. Liveness/Readiness Probe Failure
  5. Exceeding CPU/Memory Limits
  6. Resource Quotas
  7. Insufficient Cluster Resources
  8. PersistentVolume fails to mount
  9. Validation Errors
  10. Container Image Not Updating

 

View the containers in the pod:

docker ps

Get the process ID for the container:

docker inspect --format '{{ .State.Pid }}' [container_id]

Use nsenter to run a command in the process’s network namespace:

nsenter -t [container_pid] -n ip addr

UCP

Pull/Tag/Push image to your private repository

C:\ovi\docker>docker pull python

C:\ovi\docker>docker tag python:latest dtr.yyy.com/ovi/python

 

C:\ovi\docker>docker login yyy.com
Username : oviovi
Password:
Login Succeeded

C:\ovi\docker>docker image push dtr.yyy.com/ovi/python
The push refers to a repository [dtr.yyy.com/ovi/python]
3523755f4e34: Pushed
9e17bfee4bf6: Pushed

cdcaace38a54: Pushed
6e1b48dc2ccc: Pushed
ff57bdb79ac8: Pushed
6e5e20cbf4a7: Pushed
86985c679800: Pushed
8fad67424c4e: Pushed
latest: digest: sha256:0caa6c6fd0ef24684fc399c2ec84b3279d25fdbfda6c18d1ef9ff94cacae6ea9 size: 2007

 

use $ eval "$(<env.sh)" to point the Docker client at UCP

C:\ovi\docker
λ docker version --format '{{.Server.Version}}'
'ucp/3.0.5'

 

Find a version :

bash-4.2$ sudo docker run jenkins/jenkins:lts-alpine --version
2.164.2

$ sudo systemctl status docker

$ sudo systemctl stop docker

$ sudo systemctl start docker

 

bash-4.2$ rpm -qa | grep docker

docker-ee-17.06.2.ee.16-3.el7.x86_64

$ systemd-cgtop

bash-4.2$ sudo docker ps | grep unhealthy
ec90a15dad65 docker/ucp-swarm:3.0.5 "/bin/swarm manage…" 5 days ago Up 12 hours (unhealthy) 0.0.0.0:2376->2375/tcp ucp-swarm-manager
a89041bae460 docker/ucp-auth-store:3.0.5 "rethinkdb --bind …" 5 days ago Up 12 hours (unhealthy) 0.0.0.0:12383-12384->12383-12384/tcp ucp-auth-store
f92115290567 docker/ucp-etcd:3.0.5 "/bin/entrypoint.s…" 5 days ago Up 12 hours (unhealthy) 2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp ucp-kv

$ alias etcdctl="docker exec -it ucp-kv /bin/etcdctl --ca-file /etc/docker/ssl/ca.pem --cert-file /etc/docker/ssl/cert.pem --key-file /etc/docker/ssl/key.pem --endpoints https://localhost:2379"

$ etcdctl ls /docker/nodes/

sudo docker exec -it ucp-kv etcdctl \
  --endpoint https://127.0.0.1:2379 \
  --ca-file /etc/docker/ssl/ca.pem \
  --cert-file /etc/docker/ssl/cert.pem \
  --key-file /etc/docker/ssl/key.pem \
  cluster-health

$ systemd-cgtop

If you want to see the processes within a given cgroup, try this:
$ systemd-cgls /cgroup.name

For example, try this:

$ systemd-cgls /system.slice/NetworkManager.service

$ sudo docker volume ls

$ sudo docker logs --tail=100 ec90a15dad65

Remove multiple containers

$ sudo docker rm -f 81413f9a81d7 a47319cf6975 bf6c729243a9

$ find $(sudo docker volume ls -qf name=ucp.*certs | xargs -n1 sudo docker volume inspect --format {{.Mountpoint}}) -name cert.pem -o -name ca.pem -exec echo {} \; -exec openssl x509 -text -in {} \; | egrep -i "volumes|Not Before|Not After" 2>&1

 

sudo docker swarm join --token SWMTKN-1-1dqy9xonxatras1cfhdaajygu7kol4vvvvvilnzj7ou4kau7mx8z5ps-5wbz3j3vn2jl4d0ei7trdyr7f 10.zzz.yyy.165:2377

 

docker stack

A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format, similar to a docker-compose.yml file, that defines one or more services.

Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately.

Create a stack using the CLI

You can create a stack from a YAML file by executing:

$ docker-cloud stack create -f docker-cloud.yml

Command Description

docker stack deploy    – Deploy a new stack or update an existing stack
docker stack ls        – List stacks
docker stack ps        – List the tasks in the stack
docker stack rm        – Remove one or more stacks
docker stack services  – List the services in the stack

 

Example:

docker stack deploy --compose-file=db.yml db_qa
docker stack deploy --compose-file=core.yml core_qa
docker stack deploy --compose-file=invokers.yml invokers_qa
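The stack files referenced above are ordinary Compose files; their real contents aren't shown in these notes, but a hypothetical minimal db.yml could look like:

```yaml
version: "3.3"
services:
  db:
    image: redis:alpine
    ports:
      - "6379:6379"
    deploy:
      replicas: 1
```

It would be deployed the same way: docker stack deploy --compose-file=db.yml db_qa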

 

Remove all QA stacks:
docker stack rm db_qa
docker stack rm core_qa
docker stack rm invokers_qa

 

UCP Backup 

Run a backup for the UCP lab, to prepare for tomorrow's upgrade.

 

 

bash-4.2$ sudo docker container run --log-driver none --rm --interactive --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.1.1 backup --id w2in52k65cq4tkql31a0l1dn1 --passphrase "secret2019" > /tmp/backup.tar

INFO[0000] Your engine version 18.09.1-rc1, build 73d4a3c (3.10.0-862.14.4.el7.x86_64) is compatible with UCP 3.1.1 (2f13bcf)

INFO[0000] Temporarily stopping local UCP containers to ensure a consistent backup

INFO[0057] Backing up internal KV store

INFO[0000] Beginning backup

INFO[0005] Backup completed successfully

INFO[0065] Resuming stopped UCP containers

 

bash-4.2$ gpg --decrypt /tmp/backup.tar | tar --list

gpg: AES encrypted session key

gpg: encrypted with 1 passphrase

gpg: NOTE: sender requested "for-your-eyes-only"

./ucp-auth-store.json

./ucp-kube-apiserver.json

./ucp-kube-controller-manager.json

./ucp-kubelet.json

./ucp-kube-proxy.json

./ucp-kube-scheduler.json

./ucp-controller.json

./ucp-swarm-manager.json

./ucp-kv.json

./ucp-proxy.json

./ucp-client-root-ca.json

./ucp-cluster-root-ca.json

./ucp-agent.ub233kxqab8926gj7xcxi9l40.3z3s97b66v3ibma3dydn4g527.json

docker load

bash-4.2$ sudo docker info --format '{{.Swarm.NodeAddr}}'
10.240.140.169

 

sudo docker image save docker/ucp-calico-cni:3.0.5 > ucp-calico-cni:3.0.5.tar

sudo docker image save docker/ucp-calico-node:3.0.5 > ucp-calico-node:3.0.5.tar

sudo docker load < ucp-calico-cni:3.0.5.tar

sudo docker load < ucp-calico-node:3.0.5.tar

sudo docker logs ucp-reconcile -f
sudo docker ps

sudo docker logs --tail=100 a8666fb6edd4

 

Uninstall UCP

docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  uninstall-ucp

restore 

docker container run --rm -i \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp \
  restore [command options] < backup.tar

 

sudo docker ps --format '{{printf "%-25s %-30s %-25s %-25s" .Names .Image .Command .Ports}}'
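The Go template above pads columns with printf-style %-25s verbs; shell printf behaves the same way, which makes the format easy to try in isolation (the container name and image below are just sample values):

```shell
# %-Ns left-justifies a value in an N-character column
row=$(printf '%-25s %-30s' 'ucp-kv' 'docker/ucp-etcd:3.0.5')
echo "$row"
```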

 

sudo docker config ls -q |wc -l

 

Generate a support dump for single server from cli

bash-4.2$ sudo docker container run --rm --name ucp -v /var/run/docker.sock:/var/run/docker.sock --log-driver none docker/ucp:3.0.5 support > docker-support-${HOSTNAME}-$(date +%Y%m%d-%H_%M_%S).tgz

 

echo yyyyyzzzcccc | base64 -d

sudo docker stats --all --no-stream

 

A quick one-liner that displays stats for all of your running containers, for older Docker versions:

 

$ sudo docker ps -q | xargs sudo docker stats --no-stream

 

bash-4.2$ sudo docker stats --all --no-stream

—-

de980818f40c        ucp-auth-store                                                                                                                161.43%

bash-4.2$ sudo docker stats de980818f40c

 

CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O    PIDS

de980818f40c   ucp-auth-store   320.66%   815.7MiB / 251.4GiB   0.32%   7.35GB / 4.14GB   0B / 614MB   133

 

bash-4.2$ sudo docker info | grep -i debug.*server
WARNING: bridge-nf-call-ip6tables is disabled
Debug Mode (server): true

 

bash-4.2$ sudo docker start ucp-reconcile

 

bash-4.2$ sudo docker logs ucp-reconcile

 

bash-4.2$ sudo docker images -f "dangling=true" -q
713a2e66c132
1fdec422b66f
ab6a73ab2085
d6502db93ce0

 

sudo docker rmi $(sudo docker images -f "dangling=true" -q)

bash-4.2$ sudo docker rmi -f 5ec6bed65d0e
Deleted: sha256:5ec6bed65d0eb0666c34742ac04e92cbb671b2d5d0f3a7b9ef801dcd15cd8bf3
Deleted: sha256:51b753ad2fd7c2d3864abc69ad34a9b26eb76441851d626f05e4253f747f0744
Deleted: sha256:3af4af50ff8fdc147a764173adab1acb6acddf9a40c6c9a3a1063fa6717b23de
Deleted: sha256:80a1122d8837842ebacf9501320931323ec6d43dc6cdb9dcc4b7e657095cb1e6
Deleted: sha256:c4ea01c7a1b0d9b726cfe21ba6b0908dbd59a18df58867757af9ad6a540f89ad
Deleted: sha256:afc51cc67edef2a2973ded2a575f042aa6e3bbacfb41fbf780c97e0a3ae738f0
Deleted: sha256:ee73962b2dcd31630be26b67c5d3c5600609e5565321fec6ff39835d463f834d
Deleted: sha256:501b1c234375275a2c57011fc6ac595de0cb20196c93c701c99241e0e9653f99
Deleted: sha256:31c8576bd8863bbd5323c898db826ac953f0ac1899d11e075746c57c9d7e25ad
Deleted: sha256:a6d9c092438f0a4c5e574e38afd8b806be23d06708338784ee0ee47a0eef0729

 

 

Check if debug is Off

 

bash-4.2$ sudo docker info | grep -i debug.*server

Debug Mode (server): false

 

# docker load -i ucp_images_3.0.5.tar.gz

---- Reference ----

calico node/pod unhealthy

https://stackoverflow.com/questions/51648230/docker-swarm-calico-node-pod-is-unhealthy

 

sudo docker ps | grep container_name

sudo docker inspect --format '{{ .State.Pid }}' ecae75f8b0c5

 

ucp monitoring script

 

#!/bin/bash

# Check node cluster for UCP 01 ####

#sudo docker node ls

echo "##################################"
echo "--- ping TEST -----"
echo "##################################"
# Program name: pingall.sh
date
cat /users/ovi/docker/work/list_master.txt | while read output
do
  ping -c 2 "$output" > /dev/null
  if [ $? -eq 0 ]; then
    echo "node $output is up"
  else
    echo "node $output is down"
  fi
done
echo "#######################################"
echo " ----- Check space on nodes ---- "
echo "#######################################"

for i in $(cat ucp01_server.txt); do echo "host -- $i"; ssh -q -t $i df -h /app; echo "--------------------------------------"; done

echo "#######################################"
echo " ----- Check uptime ---- "
echo "#######################################"

for i in $(cat ucp01_server.txt); do echo "host -- $i"; ssh -q -t $i uptime | awk -F, '{ print $1 }'; echo "--------------------------------------"; done
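The uptime | awk -F, '{ print $1 }' step above keeps only the text before the first comma; a sketch with a canned sample line (the sample output is made up):

```shell
# awk -F, splits on commas; $1 is everything before the first one
sample='15:40:03 up 12 days,  4:01,  3 users,  load average: 0.10, 0.08, 0.05'
up_part=$(echo "$sample" | awk -F, '{ print $1 }')
echo "$up_part"    # 15:40:03 up 12 days
```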

 

push to slack

 

./ovi-slack.sh “ovi3.txt” https://hooks.slack.com/services/ZZZZEJH3B09/BQDTJzzzzzz/9xiCwqRUIHTyyyyyyyyzzzzzzzz

 

#!/bin/bash

cat ovi3.txt | while read LINE; do
  (echo "$LINE" | grep -e "$3") && curl -X POST --silent --data-urlencode \
    "payload={\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}" "$2";
done
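The sed "s/\"/'/g" in the payload above swaps double quotes for single quotes so a log line can't break out of the JSON string; for example:

```shell
# Replace every double quote in the line with a single quote
line='node "ucp01" is down'
escaped=$(echo "$line" | sed "s/\"/'/g")
echo "$escaped"    # node 'ucp01' is down
```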

 

sysdig

bash-4.2$ service replicated status
Redirecting to /bin/systemctl status replicated.service
● replicated.service – Replicated Service
Loaded: loaded (/etc/systemd/system/replicated.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2019-02-06 11:01:24 EST; 1 day 4h ago
Main PID: 18381 (code=exited, status=137)
bash-4.2$ hostname -f

 

bash-4.2$ sudo docker ps | grep replicated-ui
e6eb52ba1804 quay.io/replicated/replicated-ui:current "/usr/bin/replicated…" 2 months ago Up 4 seconds 0.0.0.0:8800->8800/tcp replicated-ui

bash-4.2$ sudo docker ps | grep replicated-operator

bash-4.2$ sudo docker start replicated-operator
replicated-operator

bash-4.2$ sudo docker ps | grep replicated-operator
547050e9035d quay.io/replicated/replicated-operator:current "/usr/bin/replicated…" 2 months ago Up 3 seconds replicated-operator

 

check all containers

 

bash-4.2$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2daf5de05bc1 10.240.210.252:9874/sysdig_backend:1091 "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 0.0.0.0:34372->80/tcp, 0.0.0.0:34371->443/tcp, 0.0.0.0:34370->6666/tcp compassionate_mccarthy
6e7efb3b2049 10.240.210.252:9874/haproxy:1.6.14.2 "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:5443->443/tcp friendly_euler
be4027ccd90d 10.240.210.252:9874/sysdig_backend:1091 "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 0.0.0.0:27878->80/tcp, 0.0.0.0:34369->443/tcp, 0.0.0.0:34368->6666/tcp optimistic_rubin
3ac12661b0fc 10.240.210.252:9874/haproxy:1.6.14.2 "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 0.0.0.0:6443->6443/tcp, 0.0.0.0:6666->6666/tcp dreamy_albattani
55d46701dbc3 10.240.210.252:9874/sysdig_backend:1091 "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 0.0.0.0:34367->80/tcp, 0.0.0.0:34366->443/tcp, 0.0.0.0:27877->6666/tcp agitated_chandrasekhar
5d85809d281f 10.240.210.252:9874/mysql:5.6.41.1 "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 0.0.0.0:3306->3306/tcp trusting_varahamihira
bfee0125a166 10.240.210.252:9874/redis:3.2.12.2 "/entrypoint.sh redi…" 7 minutes ago Up 7 minutes 0.0.0.0:6379->6379/tcp angry_herschel
b0eb4020ed7f 10.240.210.252:9874/cassandra:2.1.20.3 "/entrypoint.sh cass…" 7 minutes ago Up 7 minutes 0.0.0.0:7000-7001->7000-7001/tcp, 0.0.0.0:7199->7199/tcp, 0.0.0.0:9042->9042/tcp, 0.0.0.0:9160->9160/tcp affectionate_goldstine
d851e509f071 10.240.210.252:9874/elasticsearch:5.6.4.1 "/entrypoint.sh -Ene…" 7 minutes ago Up 7 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp vigorous_chaum
0561893fa974 registry.replicated.com/library/statsd-graphite:0.3.2 "/usr/bin/supervisor…" 7 minutes ago Up 7 minutes 0.0.0.0:34365->2443/tcp, 0.0.0.0:32770->8125/udp replicated-statsd
13d9c1f8bc86 registry.replicated.com/library/retraced-api:1.1.12-slim-20180329 "/src/api" 12 minutes ago Up 12 minutes 172.17.0.1:32772->3000/tcp retraced-api
0fb1da5c96b3 registry.replicated.com/library/retraced-processor:1.1.13-slim-20180426 "/src/processor" 12 minutes ago Up 12 minutes 3000/tcp retraced-processor
9e48c698d8c5 registry.replicated.com/library/retraced-cron:1.1.13-slim-20180426 "/bin/sh -c '/bin/ba…" 12 minutes ago Up 12 minutes retraced-cron
e0c368b298d5 registry.replicated.com/library/retraced-postgres:10.3-20180329 "docker-entrypoint.s…" 12 minutes ago Up 12 minutes 5432/tcp retraced-postgres
38bde4eec09a registry.replicated.com/library/retraced-nsq:v1.0.0-compat-20180108 "/bin/sh -c nsqd" 12 minutes ago Up 12 minutes 4150-4151/tcp, 4160-4161/tcp, 4170-4171/tcp retraced-nsqd
b8eae7c1fbb4 quay.io/replicated/replicated:current "entrypoint.sh -d" 12 minutes ago Up 12 minutes 0.0.0.0:9874-9879->9874-9879/tcp replicated
547050e9035d quay.io/replicated/replicated-operator:current "/usr/bin/replicated…" 2 months ago Up 9 minutes replicated-operator
e6eb52ba1804 quay.io/replicated/replicated-ui:current "/usr/bin/replicated…" 2 months ago Up 10 minutes 0.0.0.0:8800->8800/tcp replicated-ui
088f09e7b50c registry.replicated.com/library/premkit:1.2.0 "/usr/bin/premkit da…" 5 months ago Up 3 hours 80/tcp, 443/tcp, 2080/tcp, 0.0.0.0:9880->2443/tcp replicated-p

 

https://mysysdig.com:8800

Druid

Druid is an open-source analytics data store designed for business intelligence (OLAP) queries on event data. Druid provides low latency (real-time) data ingestion, flexible data exploration, and fast data aggregation. Existing Druid deployments have scaled to trillions of events and petabytes of data. Druid is most commonly used to power user-facing analytic applications.

docker cp

Description

Copy files/folders between a container and the local filesystem

#docker volume create ovitest

[root@ip-172-…-158 volumes]# docker volume ls | grep ovitest
local ovitest

Start a container with the local volume attached:

#docker run -d -it --name ovicontainer2 --mount source=ovitest,target=/app alpine:latest

#docker exec -it ovicontainer2 ash

From the host, create a file and copy it into the container:

[root@ip-172-…-158 tmp]# vi ovi.txt
[root@ip-172-…-158 tmp]# docker cp ovi.txt ovicontainer2:/app/