Istio
Istio is a configurable, open source service-mesh layer that connects, monitors, and secures the containers in a Kubernetes cluster.
Istio makes it easy to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, with few or no code changes in service code. You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, then configure and manage Istio using its control plane functionality, which includes:
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
Istio is designed for extensibility and meets diverse deployment needs. It does this by intercepting and configuring mesh traffic as shown in the following diagram:
Istio layers on top of Kubernetes, adding containers that are essentially invisible to the programmer and administrator. Called “sidecar” containers, these act as a “person in the middle,” directing traffic and monitoring the interactions between components. The two work in combination in three ways: configuration, monitoring, and management.
Istio features:
- Traffic management
- Security
- Telemetry
- Visualization
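Istio's traffic management can be sketched with a VirtualService manifest. This is a minimal illustration, not from the notes above: the `reviews` service and its `v1`/`v2` subsets are hypothetical and would need a matching DestinationRule.

```yaml
# Route 90% of traffic to subset v1 and 10% to subset v2 (canary-style split).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

The sidecar proxies apply these weights to intercepted traffic, so no application code changes are needed.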
Forseti Security
Forseti Security is a collection of community-driven, open-source tools to help you improve the security of your Google Cloud Platform (GCP) environment
- Inventory – keep track of your environment
- Scanner – monitor your policy
- Enforcer – enforce rules
- Explain – understand your policy
- Email Notification
ovidiu@cloudshell:~ (prod)$ gcloud iam service-accounts list
ovidiu@cloudshell:~/oviterraform (prod)$ gcloud compute instances list
Listed 0 items.
GCP
Zones, Regions, Dual-Regions, and Multi-Regions
– Subnets are regional resources
– Because subnets are regional objects, the region you select for a resource determines the subnets it can use.
– multi-regions and dual-regions are geo-redundant
- Regions are independent geographic areas that consist of zones
- A dual-region is a specific pair of regions
-Cloud KMS resources can be created in the following dual-regional locations
-Objects stored in a multi-region or dual-region are geo-redundant
– Data that is geo-redundant is stored redundantly in at least two separate geographic places separated by at least 100 miles
-Geo-redundancy occurs asynchronously
Currently, nam4 and eur4 are the only Dual-Regions available.
A GCP organization’s combined IAM policy at any level of the Cloud Resource Hierarchy is a combination of the policies at that level, plus any policies inherited from higher levels.
Cloud Spanner – Global replication of relational data
BigQuery datasets support two location types: regional and multi-regional.
Billing
Billing accounts can contain billing sub accounts
Billing Accounts are connected to a Payments Profile
Billing Account user – Link projects to billing accounts
Export billing options
Export Cloud Billing to :
- BigQuery
- Cloud Storage
Billing for resources that participate in a Shared VPC network is attributed to the service project where the resource is located
Cloud IAM
An Organization contains one or more Folders. A Folder contains one or more Projects. A Project contains one or more Resources.
-A Role is a collection of permissions
-An IAM Policy object consists of a list of bindings
- Projects can contain resources in different Region
- Projects are configured with default Region and Zone
- You don’t assign permissions to users directly. Instead, you assign them a Role which contains one or more permissions
- Members can be of the following types: Google account, Service account, Google group, G Suite domain, Cloud Identity domain
- A Binding binds a list of members to a role.
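The Role/member/Binding concepts above fit together in an IAM Policy object. A minimal sketch (the member names and project are hypothetical):

```json
{
  "bindings": [
    {
      "role": "roles/storage.objectViewer",
      "members": [
        "user:alice@example.com",
        "serviceAccount:my-app@my-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
```

Each binding grants one role to a list of members; permissions always flow through the role, never directly to a user.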
Each GCP project can contain only a single App Engine application, and once created you cannot change the location of your App Engine application
MFA stands for Multi-Factor Authentication, and it is a best practice to use this to secure accounts.
IAM roles can be assigned per bucket.
–Predefined roles are granular and assigned to the service level for much more fine-tuned access
–Primitive roles are broad, project-wide roles assigned to the project level.
Cloud Identity and G Suite are the two ways to centrally manage Google accounts.
KMS stands for Key Management Service (Cloud KMS)
Service Accounts
–Resources not hosted on GCP should use a custom service account key for authentication.
Cloud SDK
The gcloud alpha and gcloud beta commands are two groups of additional Cloud SDK commands that you can install for the gcloud component.
-Maximum Size of a Cloud Storage Bucket – unlimited
-Cloud Storage offers unlimited object storage and individual objects can be as large as 5TB
-Versioning can be enabled on a Cloud Storage Bucket.
gsutil – This is a Cloud SDK component used to interact with Cloud Storage.
to view which project is default, run gcloud config list command. This will list properties for the active configuration, including the default project.
$ gcloud compute instances list
$ gcloud compute ssh
$ gcloud compute ssh ovi@server --dry-run
*** Snapshot ***
$ gcloud compute snapshots list
$ gcloud compute disks list
$ gcloud compute disks snapshot development-server
$ gcloud compute images list
$ gcloud container clusters list
$ gcloud config list
$ gcloud app versions list
gcloud config configurations create
gcloud config configurations activate
gcloud config set project [PROJECT_ID]
gcloud logging read "login_name"
gcloud logging read "login_name" --limit 15
DISK
Create disk:
gcloud compute disks create (DISK_NAME) --type=(DISK_TYPE) --size=(SIZE) --zone=(ZONE)
gcloud compute disks create disk-1 --size=50GB --zone=us-east1-b
Resize disk:
gcloud compute disks resize (disk_name) --size=(size) --zone=(zone)
gcloud compute disks resize disk-1 --size=150 --zone=us-east1-b
Attach disk:
gcloud compute instances attach-disk instance --disk=(disk_name) --zone=(zone)
snapshot
gcloud compute disks snapshot web1 --snapshot-names web1-backup-v1 --zone us-central1-a
gcloud compute snapshots list
gcloud compute snapshots describe web1-backup-v1
-persistent disks will not be deleted when an instance is stopped.
– Persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity increases its throughput and IOPS.
Video for reference: Installing the Cloud SDK
View default cloud configuration
gcloud config list
gcloud container clusters get-credentials — to authenticate and configure kubectl
Preemptible Virtual Machines
Affordable, short-lived compute instances suitable for batch jobs and fault-tolerant workloads.
Go to console
Preemptible VMs are highly affordable, short-lived compute instances suitable for batch jobs and fault-tolerant workloads. Preemptible VMs offer the same machine types and options as regular compute instances and last for up to 24 hours.
// ENABLE PREEMPTIBLE OPTION
gcloud compute instances create my-vm --zone us-central1-b --preemptible
App Engine
- web based workloads, high availability, no ops
Flexible environments are able to use a Dockerfile to create custom runtimes
-App Engine is regional
-App Engine traffic can be split by cookie, by IP address, and at random. We cannot split traffic by zone.
App Engine Standard Environment.
- default timeout setting for a Service Instance deployed to the App Engine Standard Environment is 60 s
- The App Engine Standard environment does not allow Instance Runtimes to be modified
- App Engine Standard Environment does scale down to zero when not in use
App Engine Flexible Environment
- Runtime modifications are allowed for instances running in the App Engine Flexible environment.
In App Engine Flex the connection to Stackdriver (i.e. agent installation and configuration) is handled automatically for you
App Engine Flexible Environment does not scale down to zero
Deploying and Manipulating Multiple App Engine Versions
gcloud app deploy --version 1
canary test
gcloud app deploy --no-promote --version 2
Compute Engine
Managed Instance Group
Unmanaged Instance Group
Unmanaged instance groups do not offer...multi-zonal support
Maximum size of Compute Engine Local Disks – 3 TB
Cloud Functions
-billing interval is 100 ms
-Horizontal Scaling
– Microservices Architecture
-Cloud Functions does scale down to zero when not in use
Cloud Run
- Uses Stateless HTTP containers
- Scalability
- Built on Knative
Cloud Storage
-Cloud Storage allows Organizations to use CSEKs (Customer Supplied Encryption Keys).
-Data in a regional location operates in a multi-zone replicated configuration
*** create a bucket
$ gsutil mb -c regional -l us-east gs://ovi
$ gsutil versioning get gs://ovi
$ gsutil versioning set on gs://ovi
$ gsutil ls -a gs://ovi
$ gsutil cp <file> gs://ovi
ovi_p_eb632cd8@cloudshell:~ (ovi-24-565a3874)$ gsutil ls gs://ovi11
gs://ovi11/IMG_2759.jpg
gs://ovi11/IMG_2770.jpg
ovi@cloudshell:~ (ovi-24-565a3874)$ touch ovi_file
ovi@cloudshell:~ (ovi-24-565a3874)$ gsutil cp ovi_file gs://ovi11
Copying file://ovi_file [Content-Type=application/octet-stream]...
/ [1 files][ 0.0 B/ 0.0 B]
Operation completed over 1 objects.
ovi@cloudshell:~ (ovi-24-565a3874)$ gsutil ls gs://ovi11
gs://ovi11/IMG_2759.jpg
gs://ovi11/IMG_2770.jpg
gs://ovi11/ovi_file
Pub/Sub is a messaging service for exchanging event data among applications and services. A producer of data publishes messages to a Pub/Sub topic. A consumer creates a subscription to that topic. Subscribers either pull messages from a subscription or are configured as webhooks for push subscriptions. Every subscriber must acknowledge each message within a configurable window of time.
Cloud Pub/Sub as the messaging service to capture real time data ( ex: IoT )
– is designed to provide reliable, many-to-many, asynchronous messaging between applications (real time IoT data capture)
-Cloud Pub/Sub is designed to handle infinitely-scalable streaming data ingest
Pub/Sub
1. Create a topic.
2. Subscribe to the topic.
3. Publish a message to the topic.
4. Receive the message.
gcloud init
gcloud pubsub topics create ovi-topic
gcloud pubsub subscriptions create --topic ovi-topic ovi-sub
gcloud pubsub topics publish ovi-topic --message "hello"
gcloud pubsub subscriptions pull –auto-ack ovi-sub
gcloud config configurations activate — Activate an existing configuration
gcloud config list — list the settings for the active configuration
App Engine is a Platform as a Service – It is a fully managed solution.
gcloud container clusters resize — this command is used to resize a Kubernetes cluster
ex:
gcloud container clusters resize oviproject --node-pool primary-node-pool --num-nodes 25
gcloud config configurations create — create and activate a new configuration
Log sinks can be exported to Cloud Pub/Sub.
Storage Option
- Multi-Regional – Data accessed frequently with highest availability / Geo-redundant
- Regional – Data accessed frequently within region / Regional, redundant across availability zones
- Nearline – Data accessed less than once per month / Regional / Store infrequently accessed content
- Coldline – Data accessed less than once per year / Regional / Archive storage, backup, Disaster recovery
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs
-Lifecycle management policies can be submitted via JSON format.
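As an example of such a JSON policy, the sketch below writes a hypothetical lifecycle rule set to a file; the commented gsutil line shows how it would be submitted (the bucket name is illustrative and the command needs an authenticated session):

```shell
# Move objects to Nearline after 30 days and delete them after 365 days.
cat > lifecycle.json <<'EOF'
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365}
      }
    ]
  }
}
EOF
# gsutil lifecycle set lifecycle.json gs://ovi
```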
Cloud SQL
– Read replicas and failover replicas are charged at the same rate as stand-alone instances
-Cloud SQL for PostgreSQL does not yet support replication from an external master or external replicas for Cloud SQL instances ("This functionality is not yet supported for PostgreSQL instances")
-Cloud SQL provides two backup types: automated backups and on-demand backups
-Cloud SQL read replicas and failovers must be in the same region. The failover must be in a different zone in the same region.
-Cloud SQL is a relational database and not the best fit for time-series log data formats
Cloud Spanner
Cloud Spanner scales horizontally and serves data with low latency while maintaining transactional consistency
After you create an instance, you cannot change the configuration of that instance later
Cloud Spanner instance configurations can be regional or multi-regional
Cloud Spanner is a SQL/relational database.
Cloud Spanner is a SQL database that is horizontally scalable for cross-region support and can host large datasets.
BigQuery – Calculating cost
UI: query validator
CLI: --dry_run flag
REST: dryRun Property
-BigQuery is the only one of these Google products that supports an SQL interface
-BigQuery is billed based on the amount of data read. The dry-run flag is used to determine how many bytes are going to be read.
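To estimate cost before running a query, the dry-run flag above can be exercised as below; the query text is illustrative (it uses a well-known public dataset) and the commented bq invocation needs an authenticated session:

```shell
# Save the query, then ask bq how many bytes it would scan without running it.
cat > query.sql <<'EOF'
SELECT name, COUNT(*) AS n
FROM `bigquery-public-data.usa_names.usa_1910_2013`
GROUP BY name
EOF
# bq query --use_legacy_sql=false --dry_run "$(cat query.sql)"
```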
-Analytics data warehouse
-Use BigQuery with table partitioning
-BigQuery is the best choice for data warehousing
-BigQuery does not offer low latency and millisecond response time
-The BigQuery instance Labels and Display Name can be modified without any downtime
–BigQuery is a serverless warehouse for analytics and supports the volume and analytics requirement
– move large datasets directly to BigQuery, consider BigQuery Data Transfer Service, which automates data movement from SaaS applications to Google BigQuery on a scheduled, managed basis
Cloud Bigtable
A petabyte-scale, fully managed NoSQL database service for large analytical and operational workloads.
- Bigtable is priced by provisioned node
- Bigtable does not autoscale
- Bigtable does not store data in GCS
- Bigtable is not made for store large objects
Use Cloud Bigtable as the storage engine for large-scale, low-latency applications as well as throughput-intensive data processing and analytics.
Apache HBase is the open-source version of Bigtable
-Each cluster is located in a single zone
-Maximum number of Clusters for a Cloud Bigtable Instance is – 4
After creating a Cloud Bigtable instance, any of the following settings can be updated without any downtime:
– The application profiles for the instance, which contain replication settings
– Upgrade a development instance to a production instance
– The number of nodes in each cluster
– The number of clusters in the instance
Cloud BigTable
– Service is ideal for Time-Series data
– ideal for applications requiring very high read/write throughput and can store Petabytes of unstructured data
– can be deployed zonal
-Bigtable is not a relational database.
-Cloud Bigtable provides the ability to isolate workloads by allowing applications to connect to specific Clusters
-Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency
Cloud Datastore
-Datastore can be queried, it’s fully managed, and is a great option for catalog based applications. Datastore also supports a basic query/filter syntax.
-Datastore is a managed NoSQL database well suited to mobile applications
– Cloud Datastore queries can deliver their results at either of two consistency levels:
-Strongly consistent queries guarantee the freshest results, but may take longer to complete.
-Eventually consistent queries generally run faster, but may occasionally return stale results.
-You can store your Datastore mode data in either a multi-region location or a regional location
Cloud Firestore is the next generation of Cloud Datastore
Firestore
Easily develop rich applications using a fully managed, scalable, and serverless document database
Cloud Dataflow – service for processing large volumes of data
- Cloud Dataflow provides you with a place to run Apache Beam based jobs, on GCP
- Cloud Dataflow provides for both streaming and batch pipelines
- use cases (serverless ETL, processing data from IoT devices, processing data from POS systems)
– a fully managed ETL/ELT service for transforming, transporting, and enriching data
– Dataflow is built on top of Apache Beam and is ideal for new, cloud-native batch and streaming data processing
Cloud Dataproc – to handle existing Hadoop/Spark jobs (use it to replace existing Hadoop infrastructure)
Dataproc should be used if the processing has any dependencies to tools in the Hadoop ecosystem.
Cloud Dataproc can leverage Preemptible Compute Engine VMs
Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
Dataproc is for managed Hadoop/Spark workflows
–Cloud Dataproc has built-in integration with other Google Cloud Platform services, such as BigQuery, Cloud Storage, Cloud Bigtable, Stackdriver Logging, and Stackdriver Monitoring, so you have more than just a Spark or Hadoop cluster, you have a complete data platform.
Cloud Dataproc and Cloud Dataflow can both be used for data processing, and there’s overlap in their batch and streaming capabilities
Cloud Composer
Cloud Composer is a fully managed workflow orchestration service that empowers you to author, schedule, and monitor pipelines that span across clouds and on-premises data centers
A fully managed workflow orchestration service built on Apache Airflow.
Preemptible instances are short-lived instances (24 hours maximum)
-A static website can be hosted with cloud storage for very little money.
Cloud Functions
billing interval for Cloud Functions is 100 ms
Apigee – Design, Secure, Publish, Analyze, Monitor, and Monetize APIs
Cloud Functions support: Go, Node.js, Python
Cloud Datastudio ( similar to Tableau, Power BI )
Data Studio is able to easily create useful charts from live BigQuery data to get insight.
Security
Cloud Audit Logs
Maintains logs for each GCP Project, Folder, and Organization
Cloud Security Scanner
Cloud Armor
works with Global HTTP(S) Load Balancers to deliver defense against DDoS (Distributed Denial of Service) attacks
Data Loss Prevention API
-Use the Data Loss Prevention API to automatically detect and redact sensitive data
-Fully managed service designed to help you discover, classify, and protect your most sensitive data
Trusted Platform Module (TPM)
Cloud Code
– provides everything you need to write, debug, and deploy Kubernetes applications
Cloud Source
– is a GCP Service that is used for Code Version Control
Cloud TPU
Provides a custom-designed family of ASIC (Application-Specific Integrated Circuit) hardware accelerators built specifically for machine learning
Cloud Data Fusion ( similar to Cloud Dataflow)
Cloud Data Catalog
provides Organizations with a central location to discover, manage, and understand all their data in the Google Cloud
Cloud Memorystore
Cloud IoT Core
provides the ability to securely connect, manage, and ingest data from globally dispersed devices
MQTT stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks.
Cloud Firestore is the next generation of Cloud Datastore
Cloud Build
Cloud Source
Cloud Dataprep
- provides features to visually explore, scrub, clean, and prepare structured and unstructured data
- Dataprep cleans data in a web interface format using data from Cloud Storage or BigQuery.
- Dataprep is a UI driven data preparation service that runs on top of Cloud Dataflow
Cloud Datalab
-is a data exploration tool which provides an intuitive notebook format to combine code, results, and visualizations
– is most useful for Data Scientists
Stackdriver
-Once logs are past their retention period and are deleted, they are permanently gone. Export logs to Cloud Storage or BigQuery for long-term retention
-Performance statistics would be best served viewing in Stackdriver Monitoring using custom metrics.
Stackdriver has an integrated service to export logs for Analysis to: BigQuery, Pub/Sub, Storage
Cloud Endpoints
-Provides API Management using either Frameworks for App Engine, OAS (OpenAPI Specification), or gRPC
-Develop, deploy, protect, and monitor your APIs with Cloud Endpoints
Apigee
Provides the ability to Design, Secure, Publish, Analyze, Monitor, and Monetize APIs
Deployment manager
gsutil -m cp -r gs://ovi/deployment-manager/* .
gcloud deployment-manager deployments create my-vm --config vm-web.yaml
gcloud deployment-manager deployments create vpcs --config vpc-dependencies.yaml
gcloud deployment-manager deployments describe vpcs
gcloud deployment-manager deployments delete vpcs
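For reference, a Deployment Manager config like vm-web.yaml might look roughly like this; the zone, machine type, and image below are illustrative, not the actual file from the bucket:

```yaml
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
```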
Machine Types:
General-purpose: n1
n1-standard
n1-highcpu
n1-highmem
Compute-optimized: c2
c2-standard
Memory-optimized: n1, m2
n1-ultramem
n1-megamem
m2-ultramem
Shared-core:
f1-micro
g1-small
To initialize gcloud, simply run `gcloud init` and follow the prompts. This also configures `gsutil` and `bq`.
Networking
VPC
–GCP VPCs are global
-GCP Resources within a single VPC Subnet must be within same region (Subnets are regional resources)
-VPC network peering provides cross-project VPC communication within the same or different organizations
-VPC Network Peering and Shared VPC are methods for connecting two GCP VPC, not for connecting an On-Prem network to GCP Cloud Services
Shared VPC ( two main components )
- Host Project
- Service Project
Billing for resources that participate in a Shared VPC network is attributed to the service project where the resource is located
– VPC Network Peering is only between two Google Cloud VPC networks
- Each Cloud VPN tunnel can support up to 3 Gbps. Actual bandwidth depends on several factors
– Direct Peering exists outside of Google Cloud Platform
-(Direct Peering) can be used by GCP, but does not require it.
-Direct Peering can be used for G Suite Platform, existing outside of GCP
-you can’t use Google Cloud VPN in combination with Dedicated Interconnect, but you can use your own VPN solution.
-you can’t use Google Cloud VPN in combination with Partner Interconnect, but you can use your own VPN solution.
Dedicated Interconnect
- Find a colocation facility
- Connect on-premises network to the colocation facility
- Order LOA-CFA ( Letter of Authorization and Connecting Facility Assignment )
Partner Interconnect
Cloud VPN
Cloud load balancer
Global HTTP(S) – Cloud Load Balancer offers cookie-based Session Affinity
Global HTTP(S) – can be configured for use as a CDN (Content Delivery Network)?
Global SSL proxy – type of Cloud Load Balancer is intended for Global SSL Encrypted Traffic that is not HTTP(S)
Global TCP proxy – type of Cloud Load Balancer is intended for Global Traffic that is not HTTP(S) and not SSL Encrypted
Other data transfer options
- Cloud Storage Transfer Service: Quickly imports online data into Google Cloud Storage.
- Google BigQuery Data Transfer Service: Automates data movement from Software as a Service (SaaS) applications such as Google Ads and Google Ad Manager on a scheduled, managed basis.
Case Study
TerramEarth
- Cloud IoT Core
- Cloud Dataflow
- Cloud BigQuery
- Cloud ML Engine
- Cloud Datalab
- Datastudio
signed URL
- Allows timed access with a URL link.
- Allows someone object access without requiring them to have a GCP account.
Security
-Forseti security
AWS Fargate
- AWS Fargate (Run containers directly, without any EC2 instances)
AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.
With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers.
This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
AWS Fargate removes the need for you to interact with or think about servers or clusters.
Fargate lets you focus on designing and building your applications instead of managing the infrastructure that runs them.
PV and PVC in K8S
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"com.rbccm.appcode":"ovi"},"name":"ovi-dev2-phoenix","namespace":""},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"50Gi"},"mountOptions":["nfsvers=3"],"nfs":{"path":"/ovi/kube/dev2","server":"nfs.server.com"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"ovi"}}
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: 2019-05-02T21:34:10Z
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    com.ovi.appcode: ovi
  name: ovi-dev2-phoenix
  resourceVersion: "203315676"
  selfLink: /api/v1/persistentvolumes/ovi-dev2-phoenix
  uid: 05de00f4-6d22-11e9-a8b7-0242ac11yyy
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  claimRef:   # if the PVC is deleted, remove this block from spec (the PV is in Released status)
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: ovi-dev2-phoenix
    namespace: te20-dev2
    resourceVersion: "203305105"
    uid: 7a0cbf09-fa8b-11e9-a244-0242ac11yyy
  mountOptions:
  - nfsvers=3
  nfs:
    path: /ovi/kube/dev2
    server: nfs.server.com
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ovi
status:
  phase: Bound
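The claimRef above points back to a PersistentVolumeClaim. A minimal PVC that would bind to this PV, reconstructed from the claimRef fields (the actual claim manifest is not in these notes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovi-dev2-phoenix
  namespace: te20-dev2
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ovi
  resources:
    requests:
      storage: 50Gi
```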
LimitRange
A limit range, defined by a LimitRange object, provides constraints that can:
- Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
bash-4.2$ kubectl describe limitranges -n ovi
Name: ovi-limitrange
Namespace: ovi
Type       Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---   ---   ---------------  -------------  -----------------------
Container  memory    64Mi  32Gi  64Mi             512Mi          10
Container  cpu       100m  16    100m             500m           8
Edit limit range
bash-4.2$ kubectl edit limitranges -n ovi
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: LimitRange
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"ovi-limitrange","namespace":"ovi"},"spec":{"limits":[{"default":{"cpu":"500m","memory":"512Mi"},"defaultRequest":{"cpu":"100m","memory":"64Mi"},"max":{"cpu":"16","memory":"32Gi"},"maxLimitRequestRatio":{"cpu":"8","memory":"10"},"min":{"cpu":"100m","memory":"64Mi"},"type":"Container"}]}}
  creationTimestamp: 2019-02-13T15:54:44Z
  name: ovi-limitrange
  namespace: ovi
  resourceVersion: "127366586"
  selfLink: /api/v1/namespaces/te20/limitranges/ovi-limitrange
  uid: ae91e929-2fa7-11e9-9dd5-0242ac110005
spec:
  limits:
  - default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 64Mi
    max:
      cpu: "16"
      memory: 32Gi
    maxLimitRequestRatio:
      cpu: "8"
      memory: "10"
    min:
      cpu: 100m
      memory: 64Mi
    type: Container
K8s
kubectl get deployment -n kube-sysdig
kubectl scale deployment sysdigcloud-api --replicas=0 -n kube-sysdig
kubectl scale deployment sysdigcloud-worker --replicas=0 -n kube-sysdig
kubectl scale deployment sysdigcloud-collector --replicas=0 -n kube-sysdig
kubectl get deployment -n kube-sysdig
kubectl scale deployment sysdigcloud-worker --replicas=1 -n kube-sysdig
kubectl scale deployment sysdigcloud-collector --replicas=1 -n kube-sysdig
kubectl scale deployment sysdigcloud-api --replicas=1 -n kube-sysdig
kubectl get deployment -n kube-sysdig
bash-4.2$ kubectl logs sysdigcloud-collector-bc45fc -n kube-sysdig
*** Resource quota & limitrange
kubectl get limitrange --namespace=ovi --output=yaml
C:\ovi\docker
λ kubectl describe limitrange --namespace=ovi
Name: ovi-limitrange
Namespace: wdu0
Type       Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---   ---  ---------------  -------------  -----------------------
Container  cpu       100m  4    100m             500m           -
Container  memory    64Mi  4Gi  64Mi             512Mi          -
C:\ovi\docker
λ kubectl describe quota ovi-resourcequota -n ovi
Name: ovi-resourcequota
Namespace: ovi
Resource Used Hard
--------         ----    ----
limits.cpu 5500m 8
limits.memory 5632Mi 12Gi
pods 4 10
requests.cpu 2300m 8
requests.memory 2624Mi 12Gi
C:\ovi\docker
λ kubectl get limitrange --namespace=ovi --output=yaml
kubectl edit quota ovi-resourcequota -n ovi
kubectl edit limitrange --namespace=wdu0 --output=yaml
λ kubectl get apiservices
NAME AGE
v1. 1y
v1.apps 309d
v1.authentication.k8s.io 1y
v1.authorization.k8s.io 1y
v1.autoscaling 1y
v1.batch 1y
v1.networking.k8s.io 1y
v1.rbac.authorization.k8s.io 309d
v1.storage.k8s.io 1y
v1alpha1.admissionregistration.k8s.io 1y
v1beta1.admissionregistration.k8s.io 309d
v1beta1.apiextensions.k8s.io 1y
v1beta1.apps 1y
v1beta1.authentication.k8s.io 1y
v1beta1.authorization.k8s.io 1y
v1beta1.batch 1y
v1beta1.certificates.k8s.io 1y
v1beta1.compose.docker.com 58d
v1beta1.events.k8s.io 309d
v1beta1.extensions 1y
v1beta1.metrics.k8s.io 228d
v1beta1.policy 1y
v1beta1.rbac.authorization.k8s.io 309d
v1beta1.scheduling.k8s.io 309d
v1beta1.storage.k8s.io 1y
v1beta2.apps 1y
v1beta2.compose.docker.com 58d
v2beta1.autoscaling 1y
k8s – cert request
Create CSR:
Upload CSR file
Convert the key and the cert to base64
Update Secret yaml
Update Ingress
$ openssl genrsa -out mysite.ovi.com.key 2048
$ openssl req -out mysite.ovi.com.csr -key mysite.ovi.com.key -new -sha256
Convert the key and the cert to base64
$ cat mysite.ovi.com.cer | base64 -w 0
$ cat mysite.ovi.com.key | base64 -w 0
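The key/CSR/base64 steps can be exercised end-to-end locally; the sketch below self-signs the CSR just to produce a cert file for illustration, whereas in practice the CSR would be signed by your CA:

```shell
# 1. Generate the key and CSR for mysite.ovi.com
openssl genrsa -out mysite.ovi.com.key 2048
openssl req -new -sha256 -key mysite.ovi.com.key \
  -subj "/CN=mysite.ovi.com" -out mysite.ovi.com.csr
# 2. Self-sign (demo only; normally the CA returns the cert)
openssl x509 -req -in mysite.ovi.com.csr -signkey mysite.ovi.com.key \
  -days 1 -out mysite.ovi.com.cer
# 3. Base64-encode for the Secret manifest (-w 0 keeps each on one line)
base64 -w 0 < mysite.ovi.com.cer > cert.b64
base64 -w 0 < mysite.ovi.com.key > key.b64
```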
Update Secret yaml
apiVersion: v1
kind: Secret
metadata:
  name: ovi-cert
data:
  #tls.crt: <base64_encoded_cert>
  tls.crt: ASOtLS1CRUdJTiBDRVJUSUZJ…ZzDQpaU0JrZFNCRFlXNWhaR0V4U1RCSEJnTlZCQXNUUUZKdmV…pJZ1FtRnVjWFZsSUZKdmVXRaR0V3SGhjTk1…VRUJoTUNRMEV4T….0tLS0NCg==
  #tls.key: <base64_encoded_key>
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVkt…cHU5ZUJDQWJZR3JUaWo1ejVTMmxKRTM1VW…WMkk2WW1QMXVzc1ZsdjRBd3U3O…ZRdzBSYVQ1WGovBQUklWQVRFIEtFWS0tLS0tCg==
type: Opaque
Update Ingress
annotations:
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - mysite.ovi.com
    secretName: ovi-cert
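For context, the annotations and tls fragments above belong inside a complete Ingress manifest. A minimal sketch (the Ingress name and backend Service are hypothetical; the API version matches the Kubernetes era of these notes):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ovi-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - mysite.ovi.com
    secretName: ovi-cert
  rules:
  - host: mysite.ovi.com
    http:
      paths:
      - backend:
          serviceName: ovi-service
          servicePort: 80
```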
tensorflow
bash-4.2$ sudo docker pull tensorflow/tensorflow:nightly-py3-jupyter
nightly-py3-jupyter: Pulling from tensorflow/tensorflow
7413c47ba209: Pull complete
0fe7e7cbb2e8: Pull complete
1d425c982345: Pull complete
344da5c95cec: Pull complete
f0f373a09782: Pull complete
d7522288648d: Pull complete
5e538e77e6eb: Pull complete
841db8c4b878: Pull complete
a5d3dc5c0fca: Pull complete
93080a73b48a: Pull complete
424b59aaa775: Pull complete
d0353440d65a: Pull complete
6144de382f6d: Pull complete
729600b8b14c: Pull complete
261ae3aa721e: Pull complete
500c259546bd: Pull complete
f0da77afb904: Pull complete
29f0add3b34f: Pull complete
7fcf09da4efd: Pull complete
e18ee217efa4: Pull complete
d2e0a117e7ee: Pull complete
Digest: sha256:151cb2361ba2dd49803430cc22f46f659ad39bc5dc84ae36170212a9500e6780
Status: Downloaded newer image for tensorflow/tensorflow:nightly-py3-jupyter
docker.io/tensorflow/tensorflow:nightly-py3-jupyter
bash-4.2$ sudo docker tag tensorflow/tensorflow:nightly-py3-jupyter ovi.com/tensorflow:nightly-py3-jupyter
bash-4.2$ sudo docker push ovi.com/tensorflow:nightly-py3-jupyter
The push refers to repository [ovi.com/tensorflow]
fafed7e084c2: Pushed
871d4a411a5f: Pushed
7cfe37b7c673: Pushed
af6c44609326: Pushed
82f6a46e7b5c: Pushed
a23b6f909550: Pushed
6da19faece2a: Pushed
42f607d58fee: Pushed
d0e89b0b2f9b: Pushed
8e74e0b521d3: Pushed
d3a211c15a8c: Pushed
ec00cb7f1f0c: Pushed
d5dfb4826643: Pushed
0b6a0645d4c9: Pushed
1bc4bc65b435: Pushed
7fc7e05f494e: Pushed
03f407d8a81a: Pushed
b079b3fa8d1b: Pushed
a31dbd3063d7: Pushed
c56e09e1bd18: Pushed
543791078bdb: Pushed
nightly-py3-jupyter: digest: sha256:151cb2361ba2dd49803430cc22f46f659ad39bc5dc84ae36170212a9500e6780 size: 4706
C:\ovi\docker
λ docker run -it --rm ovi.com/tensorflow:nightly-py3-jupyter bash
________ _______________
___ __/__________________________________ ____/__ /________ __
__ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / /
_ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ /
/_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
root@448e4b7c17ec:/tf# ls -l
total 4
drwxrwxrwx 1 root root 4096 Aug 6 14:39 tensorflow-tutorials
root@448e4b7c17ec:/tf#
root@448e4b7c17ec:/tf/tensorflow-tutorials# ls -l
total 64
-rw-rw-r-- 1 root root    69 Aug  6 12:35 README.md
-rw-r--r-- 1 root root 31080 Aug  6 14:39 basic_classification.ipynb
-rw-r--r-- 1 root root 26683 Aug  6 14:39 basic_text_classification.ipynb
root@448e4b7c17ec:/tf/tensorflow-tutorials# more README.md
Want more tutorials like these?
Check out tensorflow.org/tutorials!
root@bc1c62c1274b:/tf# python –version
Python 3.6.8
bash-4.2$ sudo docker pull tensorflow/tensorflow:nightly
nightly: Pulling from tensorflow/tensorflow
7413c47ba209: Already exists
0fe7e7cbb2e8: Already exists
1d425c982345: Already exists
344da5c95cec: Already exists
e6ea787a31d6: Pull complete
f1e1aa574d76: Pull complete
c21ee1afa4b3: Pull complete
36072ce77e68: Pull complete
d1b7231452fa: Pull complete
dd94042e9d39: Pull complete
Digest: sha256:4be7452f34ac3ac0006d4ddce8b7684b8d2149c8b2531ab10d04b4d43d533e32
Status: Downloaded newer image for tensorflow/tensorflow:nightly
docker.io/tensorflow/tensorflow:nightly
bash-4.2$ sudo docker tag tensorflow/tensorflow:nightly ovi.com/tensorflow:nightly
bash-4.2$ sudo docker push ovi.com/tensorflow:nightly
The push refers to repository [ovi.com/tensorflow]
57d2de5cd2f5: Pushed
c2d60a8e5af8: Pushed
5a988b036c43: Pushing [===> ] 43.38MB/687.2MB
e754593b8b9a: Pushed
3555e11170cf: Pushed
a7e38894aa41: Pushing [===> ] 26.8MB/366.3MB
b079b3fa8d1b: Layer already exists
a31dbd3063d7: Layer already exists
c56e09e1bd18: Layer already exists
543791078bdb: Layer already exists