AWS – Clean Up

To keep AWS costs under control, we will clean up the following:

  • idle EC2 instances
  • unattached EBS volumes (everything in the "available" state)
  • orphaned snapshots – delete all snapshots that are no longer in use
  • snapshots not in use by any AMI
  • Elastic IPs that are not in use (release them)
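The volume/snapshot part of the cleanup above can be sketched as a small shell check. The volume IDs and file names below are made up; in practice the two lists would come from `aws ec2 describe-snapshots` and `aws ec2 describe-volumes`:

```shell
# Fabricated stand-ins for CLI output. In practice these lists would come from:
#   aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[].VolumeId' \
#       --output text | tr '\t' '\n' > snapshot_volumes.txt
#   aws ec2 describe-volumes --query 'Volumes[].VolumeId' \
#       --output text | tr '\t' '\n' > existing_volumes.txt
cat > snapshot_volumes.txt <<'EOF'
vol-aaa
vol-bbb
vol-ccc
EOF
cat > existing_volumes.txt <<'EOF'
vol-aaa
EOF

sort -u snapshot_volumes.txt > snap_sorted.txt
sort -u existing_volumes.txt > vol_sorted.txt

# comm -23 prints lines only in the first file: source volumes referenced by
# snapshots but no longer existing -- i.e. likely-orphaned snapshots to review.
orphan_volumes() {
  comm -23 snap_sorted.txt vol_sorted.txt
}
orphan_volumes
```

Nothing here deletes anything; the output is only a review list before running `aws ec2 delete-snapshot`.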

 

 

# aws ec2 describe-regions | grep -i regionname
"RegionName": "ap-south-1"
"RegionName": "eu-west-1"
"RegionName": "ap-northeast-2"
"RegionName": "ap-northeast-1"
"RegionName": "sa-east-1"
"RegionName": "ap-southeast-1"
"RegionName": "ap-southeast-2"
"RegionName": "eu-central-1"
"RegionName": "us-east-1"
"RegionName": "us-east-2"
"RegionName": "us-west-1"
"RegionName": "us-west-2"

 

# aws ec2 describe-volumes > describe_volumes.txt

# aws ec2 describe-volumes --region us-west-1 > describe_volumes_us-west1.txt

# aws ec2 describe-snapshots > describe_snapshots

 

 

# more describe_snapshots | grep -i SNAPSHOT | awk '{print $2}' | sort | uniq | wc -l
15445

# aws ec2 describe-volumes --region us-west-1 | grep -i available
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
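The grep above only shows that unattached volumes exist; a sketch like the following (with fabricated JSON standing in for real `describe-volumes` output) extracts their IDs so they can be reviewed before deletion:

```shell
# Fabricated stand-in for: aws ec2 describe-volumes --region us-west-1 > vols.json
cat > vols.json <<'EOF'
{"Volumes": [
  {"VolumeId": "vol-aaa", "State": "available", "Size": 8},
  {"VolumeId": "vol-bbb", "State": "in-use",    "Size": 100},
  {"VolumeId": "vol-ccc", "State": "available", "Size": 2}
]}
EOF

# Print VolumeId and Size of every unattached ("available") volume
list_available_volumes() {
  python3 - "$1" <<'PYEOF'
import json, sys
for v in json.load(open(sys.argv[1]))["Volumes"]:
    if v["State"] == "available":
        print(v["VolumeId"], v["Size"])
PYEOF
}
list_available_volumes vols.json
```

Each printed ID would then be a candidate for `aws ec2 delete-volume --volume-id <id>` after review.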

 

 

 

Reference:

AWS clean up:
http://www.robertsindall.co.uk/blog/how-to-clean-up-amazon-ebs-volumes-and-snapshots/

Detect useless snapshots and volumes in the Amazon EC2 cloud:
http://cloudacademy.com/blog/how-to-manage-ebs-volumes-snapshots-in-aws/

AWS – Metadata

Instance metadata is data about your instance that you can use to configure or manage the running instance.

[root@ip-10-192- ]# curl http://169.254.169.254/latest/meta-data/

ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-keys/
reservation-id
security-groups

services/

[root@ip-10-] curl http://169.254.169.254/latest/meta-data/ami-id ; echo
ami-de347abc
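The metadata lookups above can be wrapped in a tiny helper. This is only a sketch: `md` is a made-up name, and the call only returns data when run from inside an EC2 instance (169.254.169.254 is a link-local address):

```shell
# md <path> -- fetch one instance-metadata value, with a short timeout so the
# call fails fast when run outside EC2.
md() {
  curl -s --max-time 2 "http://169.254.169.254/latest/meta-data/$1"
}
# Example calls (on an instance): md ami-id, md instance-type, md local-ipv4
```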

 

[root@ip-10-192-10]# curl http://169.254.169.254/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
2011-01-01
2011-05-01
2012-01-12
2014-02-25
2014-11-05
2015-10-20
2016-04-19
2016-06-30

root@ip-10-192]# curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

Check public IP behind NAT

[root@ip-10-192- ]# wget -qO- http://ipecho.net/plain ; echo
50.18.yyy.yy

EBS – Volumes in Linux

After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.

[root@ip-172-30… //]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   2G  0 disk

[root@ip-172-30 //]# file -s /dev/xvdf
/dev/xvdf: data

If the output of the previous command shows simply "data" for the device, then there is no file system on the device and you need to create one:

[root@ip-172-30-0-59 //]# mkfs -t ext4 /dev/xvdf
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 33193f80-886e-41ad-858e-6be5a4dde19e
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

After formatting, check again:

[root@ip-172-30-//]# file -s /dev/xvdf
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=33193f80-886e-41ad-858e-6be5a4dde19e (extents) (large files) (huge files)

 

[root@ip-172-30- /]# ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root  80 Oct  4 14:16 .
drwxr-xr-x 7 root root 140 Oct  4 14:16 ..
lrwxrwxrwx 1 root root  10 Oct  4 14:16 33193f80-886e-41ad-858e-6be5a4dde19e -> ../../xvdf
lrwxrwxrwx 1 root root  11 Oct  4 14:17 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvda1

Edit /etc/fstab:

[root@ip-172-30-0-235 /]# cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/xvdf   /apps       ext4    defaults        0   0
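Because device names such as /dev/xvdf can change between instance stops and starts, a safer variant of that fstab entry keys the mount by the filesystem UUID shown by `ls -al /dev/disk/by-uuid/` above (same mount, just a different key; `nofail` is an optional extra so boot does not hang if the volume is detached):

```
UUID=33193f80-886e-41ad-858e-6be5a4dde19e  /apps  ext4  defaults,nofail  0  2
```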

 

Create the /apps directory and mount everything from fstab:

# mkdir /apps

# mount -a

Test:

 

[root@ip-172-30- /]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8123812 3819192   4204372  48% /
devtmpfs          498816      60    498756   1% /dev
tmpfs             509664       0    509664   0% /dev/shm
/dev/xvdf        1998672    3076   1874356   1% /apps

 

With Amazon EBS encryption, you can now create an encrypted EBS volume and attach it to a supported instance type. Data on the volume, disk I/O,
and snapshots created from the volume are then all encrypted. The encryption occurs on the servers that host the EC2 instances, providing
encryption of data as it moves between EC2 instances and EBS storage. EBS encryption is based on the industry standard AES-256
cryptographic algorithm.
** Snapshots that are taken from encrypted volumes are automatically encrypted.
** Volumes that are created from encrypted snapshots are also automatically encrypted.

Public snapshots of encrypted volumes are not supported, but you can share an encrypted snapshot with specific accounts if you
take the following steps:

– Use a custom CMK, not your default CMK, to encrypt your volume.
– Give the specific accounts access to the custom CMK.
– Create the snapshot.
– Give the specific accounts access to the snapshot.
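Those steps can be sketched with the CLI roughly as follows. This is a hypothetical helper: the account ID, CMK ID, and snapshot ID are placeholders, and the function is only defined here, not executed:

```shell
# Share an encrypted snapshot with another account (all IDs are placeholders).
share_encrypted_snapshot() {
  account_id="$1" key_id="$2" snapshot_id="$3"

  # Step 1 (precondition): the volume must already be encrypted with the
  # custom CMK ($key_id), not the default aws/ebs key.
  # Step 2: let the target account use that CMK:
  aws kms create-grant --key-id "$key_id" \
      --grantee-principal "arn:aws:iam::${account_id}:root" \
      --operations Decrypt DescribeKey CreateGrant

  # Steps 3-4: with the snapshot taken, share it with the target account:
  aws ec2 modify-snapshot-attribute --snapshot-id "$snapshot_id" \
      --attribute createVolumePermission --operation-type add \
      --user-ids "$account_id"
}
```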

– You cannot snapshot an EC2 instance store volume.

AWS – CLI

Configure AWS CLI

[root@ip-10-0- ~]# aws configure
AWS Access Key ID [****************PMOQ]:
AWS Secret Access Key [****************k2Af]:
Default region name [None]: us-west-2
Default output format [None]:

 

AWS CLI structure 

$ aws <command> <subcommand> [options and parameters]

Parameters can take various types of input values, such as numbers, strings, lists, maps, and JSON structures.

To verify that your CLI tools are set up correctly, run the following command:

# aws --version
aws-cli/1.10.8 Python/2.7.10 Linux/4.4.5-15.26.amzn1.x86_64 botocore/1.4.53

 

# ec2-describe-regions   (legacy EC2 API tools equivalent of aws ec2 describe-regions)

 

[root@ip-10-~]# aws ec2 describe-volumes
You must specify a region. You can also configure your region by running "aws configure".

[root@ip-10-0-1-205 ~]# aws ec2 describe-instances > output.json

[root@ip-10-0 ~]# aws ec2 describe-volumes --query 'Volumes[*].{ID:VolumeId,InstanceId:Attachments[0].InstanceId,AZ:AvailabilityZone,Size:Size}'

[
    {
        "InstanceId": "i-5496****",
        "AZ": "us-west-2c",
        "ID": "vol-267*****",
        "Size": 100
    },
    {
        "InstanceId": "i-7ff****",
        "AZ": "us-west-2b",
        "ID": "vol-70a******",
        "Size": 8
    },

    ...

    {
        "InstanceId": "i-f281****",
        "AZ": "us-west-2a",
        "ID": "vol-155****",
        "Size": 8
    }
]

 

 

Create a bucket:

[root@ip-10-0-1…~]# aws s3api create-bucket --bucket ovia-bucket --region us-east-1

{
    "Location": "/ovia-bucket"
}

[root@ip-10-0-1-205 ~]# aws s3 ls
2016-08-19 22:10:56 cf-templates-9d1maiwyivrc-us-west-2
2016-09-21 01:50:45 ovia-bucket
2011-09-05 12:08:04 ovidiu
2016-01-23 14:33:51 ovidocs

Copy a local file to S3:

[root@ip-10-0-1-~]# aws s3 cp ovi.sh s3://ovia-bucket/ovi3.sh
upload: ./ovi.sh to s3://ovia-bucket/ovi3.sh

[root@ip-10-0-1 ~]# aws s3 ls s3://ovia-bucket
2016-09-21 03:09:57 31 ovi3.sh

Copy a file from S3 to S3:

[root@ip-10-0-1- ~]# aws s3 cp s3://ovia-bucket/ovi3.sh s3://ovidocs
copy: s3://ovia-bucket/ovi3.sh to s3://ovidocs/ovi3.sh

[root@ip-10-0-1-205 ~]# aws s3 ls s3://ovidocs
2016-09-21 03:12:34 31 ovi3.sh

[root@ip-10-0-1-205 ~]# curl http://169.254.169.254/latest/meta-data/block-device-mapping/
ami

Create an EC2 instance via the CLI:

# aws ec2 run-instances --image-id ami-7172b611 --count 1 --instance-type t2.micro --key-name ovi --security-group-ids sg-ceb8eaaa --subnet-id subnet-ccb76aaa --associate-public-ip-address

 

To launch an instance with user data

You can launch an instance and specify user data that performs instance configuration, or that runs a script. The user data is passed as a normal string; base64 encoding is handled internally. The following example passes user data in a file called script.txt that contains a configuration script for your instance. The script runs at launch.

# aws ec2 run-instances --image-id ami-abc1234 \
--count 1 \
--instance-type m4.large --key-name keypair \
--user-data file://script.txt --subnet-id subnet-abcd1234 \
--security-group-ids sg-abcd1234

 

To use --generate-cli-skeleton with aws ec2 run-instances:

1. Generate the skeleton:

[root@ip-10-0-1-205 ~]# aws ec2 run-instances --generate-cli-skeleton

2. Redirect the output:

#aws ec2 run-instances --generate-cli-skeleton > ec2inst.json

3. Open the skeleton in a text editor and remove any parameters that you will not use:

4. Fill in the values for the instance type, key name, security group, and AMI in your default region.

5. Pass the JSON configuration to the --cli-input-json parameter using the file:// prefix:

$ aws ec2 run-instances --cli-input-json file://ec2inst.json
A client error (DryRunOperation) occurred when calling the RunInstances operation: Request would have succeeded, but DryRun flag is set.
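For illustration, after steps 3–4 a trimmed skeleton might look like the following. All values are placeholders; the key names come from the generated skeleton, and `"DryRun": true` produces the DryRunOperation response shown above:

```json
{
    "DryRun": true,
    "ImageId": "ami-abc1234",
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "keypair",
    "InstanceType": "t2.micro",
    "SecurityGroupIds": ["sg-abcd1234"],
    "SubnetId": "subnet-abcd1234"
}
```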

AWS – ELB

AWS Elastic Load Balancing distributes incoming traffic across multiple Amazon EC2 instances.

You can use Elastic Load Balancing on its own, or in conjunction with Auto Scaling. When combined, the two features allow you to create a system that automatically adds and removes EC2 instances in response to changing load.

 

Elastic Load Balancing supports two types of load balancers: Application Load Balancers (new) and Classic Load Balancers. Choose the load balancer type that meets your needs.

  • An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each EC2 instance or container instance in your VPC.
  • A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and supports either EC2-Classic or a VPC.

 

Classic Load Balancer Overview

A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of
your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.

 

Classic Load Balancer Features

High Availability
Health Checks
Security Features
SSL Offloading
Sticky Sessions
IPv6 Support

Layer 4 or Layer 7 Load Balancing
Operational Monitoring

 

Steps to create an AWS ELB (Classic Load Balancer):

Define Load Balancer

Assign Security Group

Configure Security Settings

Configure Health Check

Add EC2 instances

Add Tags

Review

Load Balancer Protocols:

HTTP

HTTPS (Secure HTTP)

TCP

SSL (Secure TCP)

 

Cross-Zone Load Balancing 

Cross-Zone Load Balancing distributes traffic evenly across all your back-end instances in all Availability Zones.

Connection Draining – the number of seconds to allow existing traffic to continue flowing after an instance is deregistered or becomes unhealthy

An ELB cannot stretch across regions.

– Before you start using Elastic Load Balancing, you must configure one or more listeners for your Classic Load Balancer.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for front-end (client to load balancer)
connections, and a protocol and a port for back-end (load balancer to back-end instance) connections

By default, we've configured your load balancer with a standard web server on port 80.
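Creating a Classic Load Balancer with exactly such an HTTP:80 listener can be sketched as follows. This is a hypothetical helper: the name, subnet ID, and security-group ID are placeholders, and the function is only defined, not run:

```shell
# Create a Classic Load Balancer with one HTTP:80 front-end/back-end listener.
create_classic_elb() {
  aws elb create-load-balancer --load-balancer-name "$1" \
      --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
      --subnets "$2" \
      --security-groups "$3"
}
# Usage (placeholder IDs): create_classic_elb my-elb subnet-abcd1234 sg-abcd1234
```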

 

You can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Elastic Load Balancers.

Route 53 will fail away from a load balancer if there are no healthy EC2 instances registered with the load balancer or if the load balancer itself is unhealthy.

Using Route 53 DNS failover, you can run applications in multiple AWS regions and designate alternate load balancers for failover across regions. In the event that your application is unresponsive, Route 53 will remove the unavailable load balancer endpoint from service and direct traffic to an alternate load balancer in another region

When you create a load balancer in your VPC, you can specify whether the load balancer is internet-facing (the default) or internal. If you select internal, you do not need to have an internet gateway to reach the load balancer, and the private IP addresses of the load balancer will be used in the load balancer’s DNS record.

 

Monitoring 

You can use the following features to monitor your load balancers, analyze traffic patterns, and troubleshoot issues with your load balancers and back-end instances.

– CloudWatch metrics

Elastic Load Balancing provides the following metrics through Amazon CloudWatch:

  • Latency
  • Request count
  • Healthy hosts
  • Unhealthy hosts
  • Backend 2xx-5xx response count
  • Elastic Load Balancing 4xx and 5xx response count

– CloudTrail Logs

– Access Logs for your Classic Load Balancer

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer.

Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses.

You can use these access logs to analyze traffic patterns and to troubleshoot issues.

  • Access logging is an optional feature of Elastic Load Balancing that is disabled by default
  • Once enabled, logs are automatically gathered and stored in Amazon S3. When you set up the logs, or any time after that, you can change the interval at which the logs are taken to 5 minutes or 60 minutes.

Monitoring the Environment

One of the benefits of Elastic Load Balancing is that it provides a number of metrics through Amazon CloudWatch. While you are performing load tests, there are three areas that are important to monitor: your load balancer, your load generating clients, and your application instances registered with Elastic Load Balancing (as well as EC2 instances that your application depends on).

 

– Sticky sessions can only be enabled with HTTP/HTTPS

  • ELB health checks on the instances should be used to ensure that traffic is routed only to healthy instances

Reference

https://aws.amazon.com/articles/1636185810492479

CloudWatch metrics documentation: http://aws.amazon.com/documentation/cloudwatch/

AWS – ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. It supports two open-source in-memory caching engines:

  • Memcached
  • Redis

Redis – a fast, open-source, in-memory data store and cache. Amazon ElastiCache for Redis is a Redis-compatible in-memory service that delivers the ease of use and power of Redis along with the availability, reliability, and performance suitable for the most demanding applications.

 

Using Amazon ElastiCache, you can add an in-memory layer to your infrastructure.

AWS – IAM

AWS IAM enables you to implement security best practices, such as least privilege, by granting unique credentials to every user within your AWS account and granting only the permissions required for each user to perform their job.

AWS IAM is secure by default; new users have no access to AWS until permissions are explicitly granted.

IAM Role

An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources.

  • You cannot change the IAM role on a running EC2 instance
  • You can only associate an IAM role when launching an EC2 instance

Permissions

  • Managed policies can only be attached to IAM users, groups, or roles; you cannot use them as resource-based policies

Temporary security credentials

  • Temporary security credentials are sometimes simply referred to as tokens

Identity Federation

  • AWS supports the Security Assertion Markup Language (SAML) 2.0.
  • Federated users (non-AWS, external identities) are users you manage outside of AWS in your corporate directory, but to whom you grant access to your AWS account using temporary security credentials. They differ from IAM users, which are created and maintained in your AWS account.

– web identity federation

Multi-Factor Authentication (MFA)

Multi-Factor Authentication – used as a second authentication factor to help secure the root account and IAM user accounts

AWS MFA supports the use of both hardware tokens and virtual MFA devices.

Time-Based One-Time Password (TOTP)

Notes:

– The credentials report lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices

– By default, when you create a new user via the IAM console:

  • the user does not get notified of the creation via email
  • the user will be provisioned with an access key
  • the user does not have access to any resources until access is specifically granted

Determining Whether a Request is Allowed or Denied

When a request is made, the AWS service decides whether a given request should be allowed or denied. The evaluation logic follows these rules:

  • By default, all requests are denied. (In general, requests made using the account credentials for resources in the account are always allowed.)
  • An explicit allow overrides this default.
  • An explicit deny overrides any allows.
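The three rules above can be modeled as a toy shell function. This is a sketch only; real IAM evaluation also considers policy conditions, resource-based policies, and so on:

```shell
# evaluate_request Allow Deny ... -> prints the final decision for a list of
# matching statement effects, following the IAM evaluation rules:
#   default deny < explicit allow < explicit deny
evaluate_request() {
  decision="Deny"                                # implicit default deny
  for effect in "$@"; do
    [ "$effect" = "Allow" ] && decision="Allow"  # explicit allow overrides default
  done
  for effect in "$@"; do
    [ "$effect" = "Deny" ] && decision="Deny"    # explicit deny overrides any allow
  done
  echo "$decision"
}
# evaluate_request            -> Deny   (no statements: default deny)
# evaluate_request Allow      -> Allow
# evaluate_request Allow Deny -> Deny   (explicit deny wins)
```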

Reference

http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html

AWS – RDS

Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud.

RDS events are published via AWS SNS and sent to you as an email or text message.

Amazon RDS gives you access to the capabilities of a familiar MySQL, MariaDB, Oracle, SQL Server, or PostgreSQL database.

Amazon RDS, with its Multi-AZ feature, is designed to provide synchronous replication to keep the data on the standby node up to date with the primary.

– Enhanced Monitoring captures system-level metrics from your RDS instance, such as CPU, memory, file system, and disk I/O, among others.

You can think of a DB instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB instances, define and refine infrastructure attributes of your DB instance(s), and control access and security via the AWS Management Console, the Amazon RDS APIs, and command-line tools.

You can run one or more DB instances, and each DB instance can support one or more databases or database schemas, depending on the engine type.

– Creating a DB snapshot on a Single-AZ DB instance results in a brief I/O suspension that typically lasts no more than a few minutes. Multi-AZ DB instances are not affected by this I/O suspension, since the backup is taken on the standby.

 

– To launch an instance from a snapshot in a different region, you have to first copy the snapshot from the region where it was created
and stored, into the target region.

RDS Dashboard -> Instances -> select the RDS instance -> Instance Actions -> Take Snapshot

After you take a snapshot

In the RDS console, from the origin region, choose "Snapshots," select the snapshot you want to copy, then click "Copy Snapshot."
You will be given a choice of the destination region for the snapshot copy.

After the copy is complete, you'll see the snapshot under "Snapshots" in the target region. From there, you should be able to use that snapshot to create a new instance.

 

Another option now available is cross-region replication, which allows a live replica to be created in one region from a master in a different region.

This is relevant because it could be used for the same purpose of moving a master server to a different region. In this scenario, the master could be migrated from one region to another with minimum downtime by first setting up a cross-region replica in the desired target region. Once the target RDS instance has been created and synchronized with the master, you would disconnect the application from the old master, then convert the new replica into a standalone master server by choosing "Promote Read Replica" from "Instance Actions" in the console. This severs the connection between the replica and its old master and allows direct write access to it, since it is now the new master.

Reserved Instances give you the option to reserve capacity within a datacenter and in turn receive a significant discount on the hourly charge for instances that are covered by the reservation. There are three RI payment options

– No Upfront, Partial Upfront, All Upfront

which enable you to balance the amount you pay upfront with your effective hourly price.

Backup

– Amazon RDS uses a default backup retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) up to a maximum of 35 days.

  • The automated backup feature of Amazon RDS enables point-in-time recovery of your DB instance

– AWS offers automated backups of RDS.

– Automated backups of RDS databases are deleted when the RDS instance is terminated.
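Changing the retention window can be sketched with the CLI as follows. This is a hypothetical wrapper: the instance identifier is a placeholder, and the function is only defined, not run:

```shell
# Set the automated-backup retention period for an RDS instance.
# 0 disables automated backups; the maximum is 35 days.
set_rds_backup_retention() {
  aws rds modify-db-instance --db-instance-identifier "$1" \
      --backup-retention-period "$2" \
      --apply-immediately
}
# Usage (placeholder identifier): set_rds_backup_retention mydb 7
```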