λ kubectl cluster-info
Kubernetes master is running at https://ucp.yyy.com:6443
KubeDNS is running at https://ucp.yyy.com:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
C:\ovi\docker
Create a namespace called 'ovi':
kubectl create namespace ovi
C:\ovi\docker
λ kubectl get namespaces | grep ovi
ovi       Active    117d
ovitest   Active    75d
C:\ovi\docker
λ kubectl create namespace ovi2
namespace "ovi2" created
C:\ovi\docker
λ kubectl delete namespace ovi2
namespace "ovi2" deleted
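The same namespace can also be created declaratively from a manifest; a minimal sketch (the file name is an assumption):

```shell
# Write a Namespace manifest equivalent to "kubectl create namespace ovi2"
cat <<EOF > ovi2-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ovi2
EOF
# Apply it (requires cluster access):
# kubectl apply -f ovi2-ns.yaml
```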
C:\ovi\docker
λ kubectl get namespaces ovi
NAME      STATUS    AGE
ovi       Active    118d
C:\ovi\docker
λ kubectl run nginx --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine --restart=Never -n ovitest
pod "nginx" created
Deploy the first app
1.
C:\ovi\Kubernetes_labT
λ kubectl run --image=dtr.yyy.com/dev.yyy.com/nginx:1.13.5-alpine ovi-nginx -n ovi
2.
C:\ovi\Kubernetes_labT
λ kubectl expose deployment ovi-nginx --type=NodePort --port=80 -n ovi
service "ovi-nginx" exposed
3.
C:\ovi\Kubernetes_labT
λ kubectl describe service ovi-nginx -n ovi
Name: ovi-nginx
Namespace: ovi
Labels: run=ovi-nginx
Annotations: <none>
Selector: run=ovi-nginx
Type: NodePort
IP: 10.96.195.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 33368/TCP
Endpoints: 192.168.158.31:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
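A NodePort service is reachable on every node at the allocated port. A hedged sketch using the service above (the node hostname is an assumption; looking the port up avoids hard-coding 33368):

```shell
# Resolve the allocated NodePort from the service spec:
NODE_PORT=$(kubectl get service ovi-nginx -n ovi \
  -o jsonpath='{.spec.ports[0].nodePort}')
# Any node's address works for a NodePort service (hostname assumed):
curl "http://ucp.yyy.com:${NODE_PORT}/"
```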
4.
C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                            READY     STATUS    RESTARTS   AGE
ovi-nginx-97db6b48b-zt6tk       1/1       Running   0          15m
run-ovinginx-7559884b7b-7nps6   1/1       Running   0          2d
5.
λ kubectl describe pods ovi-nginx-97db6b48b-zt6tk -n ovi
6. Test from a browser
Other commands
C:\ovi\Kubernetes_labT
λ kubectl logs ovi-tomcat8-5cd599dd45-bzgk7 -n ovi
...
06-Feb-2019 20:20:03.407 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
06-Feb-2019 20:20:03.413 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
06-Feb-2019 20:20:03.414 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 618 ms
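A few log variations that are often useful at this point; a sketch against the same pod:

```shell
# Stream new log lines as they arrive, starting from the last 100:
kubectl logs -f --tail=100 ovi-tomcat8-5cd599dd45-bzgk7 -n ovi
# After a crash/restart, inspect the previous container instance:
kubectl logs --previous ovi-tomcat8-5cd599dd45-bzgk7 -n ovi
```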
C:\ovi\docker
λ kubectl get pods --namespace=ovitest
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          9m
C:\ovi\docker
λ kubectl get pods --all-namespaces | grep ovitest
ovitest   nginx     1/1       Running   0          3m
kubectl describe pods nginx --namespace=ovitest
C:\ovi\docker
λ kubectl describe pods/nginx -n ovitest
C:\ovi\docker
λ kubectl get pod -o wide --namespace=ovitest
NAME      READY     STATUS    RESTARTS   AGE       IP             NODE
nginx     1/1       Running   0          59m       192.168.80.2   ovi.com
C:\ovi\docker
λ kubectl get namespaces --show-labels
kubectl run nginx --image=nginx --restart=Never -n ovi
λ kubectl get po --all-namespaces -o wide
Check the ingress controller
C:\ovi\docker
λ kubectl get pod -n ingress-nginx
NAME                                        READY     STATUS              RESTARTS   AGE
default-http-backend-67f6f4bdc-n6fdt        0/1       ContainerCreating   0          3d
nginx-ingress-controller-584dc49b67-qjbpw   0/1       Init:0/1            0          3d
$ kubectl get pod -n kube-system
C:\ovi\Kubernetes_labT
λ kubectl get pods -n ovi
NAME                            READY     STATUS             RESTARTS   AGE
ovi-nginx-97db6b48b-zt6tk       1/1       Running            0          1h
ovi-tomcat-56d8469df-rgd62      0/1       ImagePullBackOff   0          31m
ovi-tomcat8-5cd599dd45-bzgk7    1/1       Running            0          26m
run-ovinginx-7559884b7b-7nps6   1/1       Running            0          2d
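The ImagePullBackOff pod above can usually be diagnosed from its events; a sketch:

```shell
# Events normally name the exact pull failure (bad tag, registry auth, DNS):
kubectl describe pod ovi-tomcat-56d8469df-rgd62 -n ovi
# Confirm which image reference the pod is actually trying to pull:
kubectl get pod ovi-tomcat-56d8469df-rgd62 -n ovi \
  -o jsonpath='{.spec.containers[0].image}'
```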
C:\ovi\Kubernetes_labT
λ kubectl run nginx --image=nginx --restart=Never -n mynamespace --dry-run -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
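The generated manifest can be saved and applied, which is a common way to bootstrap YAML; a sketch (newer kubectl releases expect --dry-run=client instead of the bare flag):

```shell
# Save the generated Pod manifest, then create the pod from it:
kubectl run nginx --image=nginx --restart=Never -n mynamespace \
  --dry-run -o yaml > nginx-pod.yaml
kubectl apply -f nginx-pod.yaml
```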
Edit ResourceQuota
C:\ovi\docker
λ kubectl describe quota -n ovi
Name: ovi-resourcequota
Namespace: ovi
Resource          Used     Hard
--------          ----     ----
limits.cpu        3500m    8
limits.memory     7Gi      16Gi
pods              7        24
requests.cpu      3100m    8
requests.memory   6720Mi   16Gi
C:\ovi\docker
λ kubectl get resourcequota ovi-resourcequota --namespace=ovi --output=yaml
C:\ovi\docker
λ kubectl edit resourcequota ovi-resourcequota --namespace=ovi
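For scripted changes, kubectl patch avoids the interactive editor; a sketch (the new pod limit is an example value, not from the lab):

```shell
# Raise the hard pod limit on the quota non-interactively:
kubectl patch resourcequota ovi-resourcequota -n ovi \
  --patch '{"spec":{"hard":{"pods":"32"}}}'
```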
You can run a command inside a container:
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
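A concrete instance of the template, using the nginx pod from the ovitest namespace above:

```shell
# Run a one-off command in the container:
kubectl exec nginx -n ovitest -- nginx -v
# Or open an interactive shell:
kubectl exec -it nginx -n ovitest -- /bin/sh
```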
C:\ovi\docker
λ kubectl describe nodes 2>&1 | grep -i Disk
Reasons K8s Deployments Fail
- Wrong Container Image / Invalid Registry Permissions
- Application Crashing after Launch
- Missing ConfigMap or Secret
- Liveness/Readiness Probe Failure
- Exceeding CPU/Memory Limits
- Resource Quotas
- Insufficient Cluster Resources
- PersistentVolume fails to mount
- Validation Errors
- Container Image Not Updating
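Most of the failure modes above surface in the same few places; a first-pass triage sketch (the pod name is a placeholder):

```shell
kubectl get pods -n ovi                       # STATUS column: BackOff, Pending, ...
kubectl describe pod <pod-name> -n ovi        # events: pulls, probes, mounts, scheduling
kubectl logs <pod-name> -n ovi --previous     # why the app crashed last time
kubectl get events -n ovi --sort-by=.metadata.creationTimestamp
kubectl describe quota -n ovi                 # quota exhaustion
```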
View the containers in the pod:
docker ps
Get the process ID for the container:
docker inspect --format '{{ .State.Pid }}' [container_id]
Use nsenter to run a command in the process's network namespace:
nsenter -t [container_pid] -n ip addr
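The three steps chained together; a sketch assuming Docker is the container runtime and the container name contains "nginx":

```shell
# Find the container, resolve its PID, enter its network namespace:
CID=$(docker ps --filter "name=nginx" --format '{{.ID}}' | head -n1)
PID=$(docker inspect --format '{{ .State.Pid }}' "$CID")
nsenter -t "$PID" -n ip addr
```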