AWS – EFS

Amazon EFS is a fully managed service that makes it easy to set up and scale file storage in the Amazon cloud.

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.

Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.

Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

 

Amazon EFS uses the NFSv4.1 protocol

From AWS Console, go to EFS

Step 1 : Configure file system access
Step 2 : Configure optional settings
Step 3 : Review and create

An Amazon EFS file system is accessed by EC2 instances running inside one of your VPCs. Instances connect to a file system via a network interface called a mount target. Each mount target has an IP address, which is assigned automatically or can be specified.

Create mount targets

Instances connect to a file system via mount targets you create. We recommend creating a mount target in each of your VPC’s Availability Zones so that EC2 instances across your VPC can access the file system.

Mount target – To access your file system, you must create mount targets in your VPC. Each mount target has the following properties: the mount target ID, the subnet ID in which it is created, the file system ID for which it is created, an IP address at which the file system may be mounted, and the mount target state. You can use the IP address or the DNS name in your mount command. Each mount target has a DNS name of the following form:

availability-zone.file-system-id.efs.aws-region.amazonaws.com 
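As a quick illustration, the DNS name above can be composed from its parts. The AZ, file system ID, and region below are hypothetical, not values from this walkthrough:

```python
def efs_mount_target_dns(az, fs_id, region):
    """Compose the per-AZ mount-target DNS name in the format shown above."""
    return f"{az}.{fs_id}.efs.{region}.amazonaws.com"

# Hypothetical values for illustration:
print(efs_mount_target_dns("us-west-2a", "fs-abcd1234", "us-west-2"))
# -> us-west-2a.fs-abcd1234.efs.us-west-2.amazonaws.com
```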

On the first EC2 instance:

# yum install nfs-utils

Create a local directory (e.g. /efs):

# mkdir efs

Mount the target with the mount command; you can use either the DNS name or the IP address (the IP is used below):

# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 172.30.zzyy:/ /efs

[root@ efs]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 992M 80K 992M 1% /dev
tmpfs 1002M 0 1002M 0% /dev/shm
/dev/xvda1 7.8G 1.2G 6.6G 15% /
/dev/xvdb 25G 2.5G 21G 11% /data
/dev/xvdh 79G 19G 56G 26% /data3
172.30.yy.zz:/ 8.0E 0 8.0E 0% /efs

[root@ efs]# ls -l /efs
total 16
drwxr-xr-x 2 root root 4096 Nov 14 20:44 data
drwxr-xr-x 2 root root 4096 Nov 14 20:43 data2
drwxr-xr-x 2 root root 4096 Nov 14 20:43 data3
drwxr-xr-x 2 root root 4096 Nov 14 19:09 test_efs

On the second EC2 instance:

# yum install nfs-utils

Mount the target under the local folder /efs_2:

# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 172.30.yy.zz:/ /efs_2

[root@ip-172-30 efs_2]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 992M 68K 992M 1% /dev
tmpfs 1002M 0 1002M 0% /dev/shm
/dev/xvda1 7.8G 1.1G 6.7G 14% /
172.30.yy.zz:/ 8.0E 0 8.0E 0% /efs_2

[root@ /]# ls -l /efs_2/
total 16
drwxr-xr-x 2 root root 4096 Nov 14 20:44 data
drwxr-xr-x 2 root root 4096 Nov 14 20:43 data2
drwxr-xr-x 2 root root 4096 Nov 14 20:43 data3
drwxr-xr-x 2 root root 4096 Nov 14 19:09 test_efs

install boto

Boto is a Python package that provides interfaces to AWS, including Amazon S3.

boto – the AWS SDK for Python – makes it easy to integrate your Python application, library, or script with AWS services, including Amazon S3, Amazon EC2, Amazon DynamoDB, and more.

[root@ip-172-…-126 ~]# pip list | grep boto
You are using pip version 6.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
boto (2.42.0)
botocore (1.4.86)

[root@ip-172-30- ~]# pip install -U boto
You are using pip version 6.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting boto
Downloading boto-2.43.0-py2.py3-none-any.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 354kB/s
Installing collected packages: boto
Found existing installation: boto 2.42.0
Uninstalling boto-2.42.0:
Successfully uninstalled boto-2.42.0
Successfully installed boto-2.43.0

 

pip is a package management system used to install and manage software packages written in Python. Many packages can be found in the Python Package Index (PyPI). Python 2.7.9 and later (in the Python 2 series) and Python 3.4 and later include pip (pip3 for Python 3) by default.

 

[root@ip-172-30 ~]# pip install --upgrade pip
You are using pip version 6.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting pip
Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 372kB/s
Installing collected packages: pip
Found existing installation: pip 6.1.1
Uninstalling pip-6.1.1:
Successfully uninstalled pip-6.1.1
Successfully installed pip-9.0.1

 

 

 

/etc/boto.cfg
[root@ etc]# more boto.cfg
[Credentials]
aws_access_key_id = AKIA************************
aws_secret_access_key = oH7JxIljhY**************

 

 

A simple script to upload a file to AWS S3:

#!/usr/bin/python

import boto
from boto.s3.key import Key

keyId = "AKIA**************"
sKeyId = "eOCZ4********************"

fileName = "abcd.txt"
bucketName = "ovi-test"
file = open(fileName)

conn = boto.connect_s3(keyId, sKeyId)
bucket = conn.get_bucket(bucketName)

# Get a Key object for the bucket
k = Key(bucket)

# Create a new key with the name of the file as its id
k.key = fileName

# Upload the file
result = k.set_contents_from_file(file)
# result contains the size of the file uploaded
file.close()

You can verify that the file was uploaded from the AWS CLI:

# aws s3 ls s3://ovi-test
PRE aws_doc/
PRE test/
2016-11-11 15:24:10 14 abcd.txt
2016-10-06 14:30:07 14 ovi2.txt
2016-10-06 12:01:16 13 test

AWS – clean up

To keep AWS costs under control, clean up the following:

  • Terminate idle EC2 instances
  • Delete EBS volumes that are unattached (in the "available" state)
  • Delete orphaned snapshots that are no longer in use
  • Delete snapshots not referenced by any AMI
  • Release Elastic IPs when they are not in use
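The "unattached volumes" step can be scripted against `aws ec2 describe-volumes` JSON output. A minimal sketch; the excerpt below is hypothetical sample data, not output from this account:

```python
import json

# Hypothetical excerpt of `aws ec2 describe-volumes` output.
sample = json.loads("""
{"Volumes": [
  {"VolumeId": "vol-11111111", "State": "available", "Size": 8},
  {"VolumeId": "vol-22222222", "State": "in-use", "Size": 100},
  {"VolumeId": "vol-33333333", "State": "available", "Size": 25}
]}
""")

def unattached_volumes(described):
    """Volumes whose State is 'available' are attached to no instance
    and are candidates for deletion."""
    return [v["VolumeId"] for v in described["Volumes"]
            if v["State"] == "available"]

print(unattached_volumes(sample))
# -> ['vol-11111111', 'vol-33333333']
```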

 

 

# aws ec2 describe-regions | grep -i regionname
"RegionName": "ap-south-1"
"RegionName": "eu-west-1"
"RegionName": "ap-northeast-2"
"RegionName": "ap-northeast-1"
"RegionName": "sa-east-1"
"RegionName": "ap-southeast-1"
"RegionName": "ap-southeast-2"
"RegionName": "eu-central-1"
"RegionName": "us-east-1"
"RegionName": "us-east-2"
"RegionName": "us-west-1"
"RegionName": "us-west-2"

 

# aws ec2 describe-volumes > describe_volumes.txt

# aws ec2 describe-volumes --region us-west-1 > describe_volumes_us-west1.txt

# aws ec2 describe-snapshots > describe_snapshots

 

 

# more describe_snapshots | grep -i SNAPSHOT | awk '{print $2}' | sort | uniq | wc -l
15445

# aws ec2 describe-volumes --region us-west-1 | grep -i available
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",
"State": "available",

 

 

 

Reference :

aws clean up

http://www.robertsindall.co.uk/blog/how-to-clean-up-amazon-ebs-volumes-and-snapshots/

Detect useless Snapshots and Volumes in the Amazon EC2 Cloud

http://cloudacademy.com/blog/how-to-manage-ebs-volumes-snapshots-in-aws/

aws – Storage Gateway

gateway-cached volumes

Gateway-cached volumes allow you to use Amazon S3 for your primary data, while retaining a portion of it locally in a cache for frequently accessed data.

 

gateway-stored volumes

Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS.

 

gateway-virtual tape library

A gateway-VTL presents your backup application with a virtual tape library interface, with the virtual tapes stored in AWS.

aws – metadata

Instance metadata is data about your instance that you can use to configure or manage the running instance.

[root@ip-10-192- ]# curl http://169.254.169.254/latest/meta-data/

ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-keys/
reservation-id
security-groups

services/

[root@ip-10-] curl http://169.254.169.254/latest/meta-data/ami-id ; echo
ami-de347abc

 

[root@ip-10-192-10]# curl http://169.254.169.254/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
2011-01-01
2011-05-01
2012-01-12
2014-02-25
2014-11-05
2015-10-20
2016-04-19
2016-06-30

root@ip-10-192]# curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
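The same metadata endpoint can be queried from Python. A minimal sketch; the fetch itself only works from inside an EC2 instance, so only the URL construction is testable elsewhere:

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path=""):
    """Build the URL for an instance metadata path."""
    return METADATA_BASE + path.lstrip("/")

def get_metadata(path=""):
    """Fetch a metadata value; this only works from inside an EC2 instance."""
    with urllib.request.urlopen(metadata_url(path), timeout=2) as resp:
        return resp.read().decode()

# On an instance, get_metadata("ami-id") returns the AMI ID,
# like the curl example above.
```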

Check public IP behind NAT

[root@ip-10-192- ]# wget -qO- http://ipecho.net/plain ; echo
50.18.yyy.yy

EBS – volume in linux

After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.

[root@ip-172-30… //]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   2G  0 disk

[root@ip-172-30 //]# file -s /dev/xvdf
/dev/xvdf: data

If the output of the previous command shows simply "data" for the device, there is no file system on the device and you need to create one:

[root@ip-172-30-0-59 //]# mkfs -t ext4 /dev/xvdf
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 33193f80-886e-41ad-858e-6be5a4dde19e
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

After formatting, check again:

[root@ip-172-30-//]# file -s /dev/xvdf
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=33193f80-886e-41ad-858e-6be5a4dde19e (extents) (large files) (huge files)

 

[root@ip-172-30- /]# ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root  80 Oct  4 14:16 .
drwxr-xr-x 7 root root 140 Oct  4 14:16 ..
lrwxrwxrwx 1 root root  10 Oct  4 14:16 33193f80-886e-41ad-858e-6be5a4dde19e -> ../../xvdf
lrwxrwxrwx 1 root root  11 Oct  4 14:17 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvda1

edit /etc/fstab

[root@ip-172-30-0-235 /]# cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/xvdf   /apps       ext4    defaults        0   0

 

create a directory apps

# mkdir apps

# mount -a

test

 

[root@ip-172-30- /]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8123812 3819192   4204372  48% /
devtmpfs          498816      60    498756   1% /dev
tmpfs             509664       0    509664   0% /dev/shm
/dev/xvdf        1998672    3076   1874356   1% /apps

 

With Amazon EBS encryption, you can now create an encrypted EBS volume and attach it to a supported instance type. Data on the volume, disk I/O,
and snapshots created from the volume are then all encrypted. The encryption occurs on the servers that host the EC2 instances, providing
encryption of data as it moves between EC2 instances and EBS storage. EBS encryption is based on the industry standard AES-256
cryptographic algorithm.
** Snapshots that are taken from encrypted volumes are automatically encrypted.
** Volumes that are created from encrypted snapshots are also automatically encrypted.

Public snapshots of encrypted volumes are not supported, but you can share an encrypted snapshot with specific accounts if you
take the following steps:

– Use a custom CMK, not your default CMK, to encrypt your volume.
– Give the specific accounts access to the custom CMK.
– Create the snapshot.
– Give the specific accounts access to the snapshot.

– You cannot snapshot an EC2 instance store volume.

AWS – CLI

Configure AWS CLI

[root@ip-10-0- ~]# aws configure
AWS Access Key ID [****************PMOQ]:
AWS Secret Access Key [****************k2Af]:
Default region name [None]: us-west-2
Default output format [None]:

 

AWS CLI structure 

$ aws <command> <subcommand> [options and parameters]

Parameters can take various types of input values, such as numbers, strings, lists, maps, and JSON structures.

To verify that your CLI tools are set up correctly, run the following command:

# aws --version
aws-cli/1.10.8 Python/2.7.10 Linux/4.4.5-15.26.amzn1.x86_64 botocore/1.4.53

 

# ec2-describe-regions

 

[root@ip-10-~]# aws ec2 describe-volumes
You must specify a region. You can also configure your region by running "aws configure".

[root@ip-10-0-1-205 ~]# aws ec2 describe-instances > output.json

[root@ip-10-0 ~]# aws ec2 describe-volumes --query 'Volumes[*].{ID:VolumeId,InstanceId:Attachments[0].InstanceId,AZ:AvailabilityZone,Size:Size}'

[
{
"InstanceId": "i-5496****",
"AZ": "us-west-2c",
"ID": "vol-267*****",
"Size": 100
},
{
"InstanceId": "i-7ff****",
"AZ": "us-west-2b",
"ID": "vol-70a******",
"Size": 8
},

…………..

{
"InstanceId": "i-f281****",
"AZ": "us-west-2a",
"ID": "vol-155****",
"Size": 8
}
]
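Once the query returns JSON like the above, it is easy to aggregate in Python. A sketch that totals volume size per Availability Zone; the rows below are made-up stand-ins for the masked IDs above:

```python
from collections import defaultdict

# Hypothetical rows mirroring the --query output above (IDs are made up).
volumes = [
    {"InstanceId": "i-aaaa1111", "AZ": "us-west-2c", "ID": "vol-0001", "Size": 100},
    {"InstanceId": "i-bbbb2222", "AZ": "us-west-2b", "ID": "vol-0002", "Size": 8},
    {"InstanceId": "i-cccc3333", "AZ": "us-west-2a", "ID": "vol-0003", "Size": 8},
]

def size_by_az(vols):
    """Total provisioned EBS GiB per Availability Zone."""
    totals = defaultdict(int)
    for v in vols:
        totals[v["AZ"]] += v["Size"]
    return dict(totals)

print(size_by_az(volumes))
# -> {'us-west-2c': 100, 'us-west-2b': 8, 'us-west-2a': 8}
```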

 

 

create a bucket 

[root@ip-10-0-1…~]# aws s3api create-bucket --bucket ovia-bucket --region us-east-1

{
"Location": "/ovia-bucket"
}

[root@ip-10-0-1-205 ~]# aws s3 ls
2016-08-19 22:10:56 cf-templates-9d1maiwyivrc-us-west-2
2016-09-21 01:50:45 ovia-bucket
2011-09-05 12:08:04 ovidiu
2016-01-23 14:33:51 ovidocs

copy a local file to s3

[root@ip-10-0-1-~]# aws s3 cp ovi.sh s3://ovia-bucket/ovi3.sh
upload: ./ovi.sh to s3://ovia-bucket/ovi3.sh

[root@ip-10-0-1 ~]# aws s3 ls s3://ovia-bucket
2016-09-21 03:09:57 31 ovi3.sh

copy a file from S3 to S3

[root@ip-10-0-1- ~]# aws s3 cp s3://ovia-bucket/ovi3.sh s3://ovidocs
copy: s3://ovia-bucket/ovi3.sh to s3://ovidocs/ovi3.sh

[root@ip-10-0-1-205 ~]# aws s3 ls s3://ovidocs
2016-09-21 03:12:34 31 ovi3.sh

[root@ip-10-0-1-205 ~]# curl http://169.254.169.254/latest/meta-data/block-device-mapping/
ami

create an EC2 instance via the CLI

# aws ec2 run-instances --image-id ami-7172b611 --count 1 --instance-type t2.micro --key-name ovi --security-group-ids sg-ceb8eaaa --subnet-id subnet-ccb76aaa --associate-public-ip-address

 

To launch an instance with user data

You can launch an instance and specify user data that performs instance configuration, or that runs a script. The user data needs to be passed as a normal string; base64 encoding is handled internally. The following example passes user data in a file called script.txt that contains a configuration script for your instance. The script runs at launch.

# aws ec2 run-instances --image-id ami-abc1234 \
--count 1 \
--instance-type m4.large --key-name keypair \
--user-data file://script.txt --subnet-id subnet-abcd1234 \
--security-group-ids sg-abcd1234
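On the wire, the EC2 API receives the user data base64-encoded; the CLI does this for you, but lower-level SDK calls may require you to encode it yourself. A minimal sketch of that encoding step:

```python
import base64

def encode_user_data(script_text):
    """Base64-encode a user-data script, as the EC2 API ultimately expects."""
    return base64.b64encode(script_text.encode("utf-8")).decode("ascii")

encoded = encode_user_data("#!/bin/bash\nyum -y update\n")
# Decoding returns the original script unchanged:
assert base64.b64decode(encoded).decode() == "#!/bin/bash\nyum -y update\n"
```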

 

To use --generate-cli-skeleton with aws ec2 run-instances:

1. Generate the skeleton:

[root@ip-10-0-1-205 ~]# aws ec2 run-instances --generate-cli-skeleton

2. Redirect the output to a file:

#aws ec2 run-instances --generate-cli-skeleton > ec2inst.json

3. Open the skeleton in a text editor and remove any parameters that you will not use:

4. Fill in the values for the instance type, key name, security group, and AMI in your default region.

5. Pass the JSON configuration to the --cli-input-json parameter using the file:// prefix:

$ aws ec2 run-instances --cli-input-json file://ec2inst.json
A client error (DryRunOperation) occurred when calling the RunInstances operation: Request would have succeeded, but DryRun flag is set.
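The edited skeleton is plain JSON, so steps 3–4 can also be scripted. A sketch; the parameter values below are hypothetical, and DryRun is left true so the request is only validated (producing the DryRunOperation message shown above):

```python
import json

# Hypothetical skeleton after removing unused parameters (step 3)
# and filling in values (step 4).
skeleton = {
    "DryRun": True,
    "ImageId": "ami-abc1234",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceType": "t2.micro",
    "KeyName": "keypair",
    "SecurityGroupIds": ["sg-abcd1234"],
}

# Write the JSON that --cli-input-json file://ec2inst.json will consume.
with open("ec2inst.json", "w") as f:
    json.dump(skeleton, f, indent=2)
```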

AWS – ELB

AWS Elastic Load Balancing distributes incoming traffic across multiple Amazon EC2 instances.

You can use Elastic Load Balancing on its own, or in conjunction with Auto Scaling. When combined, the two features allow you to create a system that automatically adds and removes EC2 instances in response to changing load.

 

Elastic Load Balancing supports two types of load balancers: Application Load Balancers (new) and Classic Load Balancers. Choose the load balancer type that meets your needs.

  • An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each EC2 instance or container instance in your VPC.
  • A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and supports either EC2-Classic or a VPC.

 

Classic Load Balancer Overview

A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of
your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.

 

Classic Load Balancer Features

High Availability
Health Checks
Security Features
SSL Offloading
Sticky Sessions
IPv6 Support

Layer 4 or Layer 7 Load Balancing
Operational Monitoring

 

Steps to create an AWS – ELB (Classic Load Balancer)

Define Load Balancer

Assign Security Group

Configure Security Settings

Configure Health Check

Add EC2 instances

Add Tags

Review

Load Balancer Protocol:

HTTP

HTTPS (Secure HTTP)

TCP

SSL (Secure TCP)

 

Cross-Zone Load Balancing 

Cross-Zone Load Balancing distributes traffic evenly across all your back-end instances in all Availability Zones.
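The even spread can be sketched as a simple round-robin over every registered instance, whichever AZ it lives in. The instance names below are hypothetical:

```python
from itertools import cycle

# Hypothetical backends: two instances in AZ "a", one in AZ "b".
backends = ["i-az-a-1", "i-az-a-2", "i-az-b-1"]

def round_robin(instances, n_requests):
    """With cross-zone load balancing enabled, requests are spread evenly
    over all registered instances, regardless of their AZ."""
    rr = cycle(instances)
    counts = {i: 0 for i in instances}
    for _ in range(n_requests):
        counts[next(rr)] += 1
    return counts

print(round_robin(backends, 6))
# -> {'i-az-a-1': 2, 'i-az-a-2': 2, 'i-az-b-1': 2}
```

Without cross-zone balancing, traffic would instead be split per zone first, so the lone instance in AZ "b" would see as much traffic as the two AZ "a" instances combined.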

Connection Draining – the number of seconds to allow existing connections to finish before an instance is deregistered

An ELB cannot stretch across regions.

– Before you start using Elastic Load Balancing, you must configure one or more listeners for your Classic Load Balancer.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for front-end (client to load balancer)
connections, and a protocol and a port for back-end (load balancer to back-end instance) connections

By default, we’ve configured your load balancer with a standard web server on port 80.

 

You can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Elastic Load Balancers.

Route 53 will fail away from a load balancer if there are no healthy EC2 instances registered with the load balancer or if the load balancer itself is unhealthy.

Using Route 53 DNS failover, you can run applications in multiple AWS regions and designate alternate load balancers for failover across regions. In the event that your application is unresponsive, Route 53 will remove the unavailable load balancer endpoint from service and direct traffic to an alternate load balancer in another region

When you create a load balancer in your VPC, you can specify whether the load balancer is internet-facing (the default) or internal. If you select internal, you do not need to have an internet gateway to reach the load balancer, and the private IP addresses of the load balancer will be used in the load balancer’s DNS record.

 

Monitoring 

You can use the following features to monitor your load balancers, analyze traffic patterns, and troubleshoot issues with your load balancers and back-end instances

– CloudWatch metrics

Elastic Load Balancing provides the following metrics through Amazon CloudWatch

  • Latency
  • Request count
  • Healthy hosts
  • Unhealthy hosts
  • Backend 2xx-5xx response count
  • Elastic Load Balancing 4xx and 5xx response count
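The backend response-count metrics are grouped by status-code family. A toy helper showing how a status code maps to its metric family (metric names follow the Classic Load Balancer convention):

```python
def status_bucket(code):
    """Map a backend HTTP status code to the CloudWatch metric family it
    counts toward (HTTPCode_Backend_2XX ... HTTPCode_Backend_5XX)."""
    return "HTTPCode_Backend_{}XX".format(code // 100)

print(status_bucket(200))  # -> HTTPCode_Backend_2XX
print(status_bucket(503))  # -> HTTPCode_Backend_5XX
```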

– CloudTrail Logs

– Access Logs for your Classic Load Balancer

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer.

Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses.

You can use these access logs to analyze traffic patterns and to troubleshoot issues.

  • Access logging is an optional feature of Elastic Load Balancing that is disabled by default
  • Once enabled, the logs are automatically gathered and stored in Amazon S3. When you set up the logs, or any time after that, you can change the interval at which they are captured to 5 minutes or 60 minutes.

Monitoring the Environment

One of the benefits of Elastic Load Balancing is that it provides a number of metrics through Amazon CloudWatch. While you are performing load tests, there are three areas that are important to monitor: your load balancer, your load generating clients, and your application instances registered with Elastic Load Balancing (as well as EC2 instances that your application depends on).

 

– Sticky sessions can only be enabled with HTTP/HTTPS

  • ELB health checks on the instances should be used to ensure that traffic is routed only to the healthy instances

Reference

https://aws.amazon.com/articles/1636185810492479

CloudWatch metrics documentation: http://aws.amazon.com/documentation/cloudwatch/

AWS – ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. It supports two open-source in-memory caching engines:

  • Memcached
  • Redis

Redis is a fast, open-source, in-memory data store and cache. Amazon ElastiCache for Redis is a Redis-compatible in-memory service that delivers the ease of use and power of Redis, along with the availability, reliability, and performance suitable for the most demanding applications.

 

Using Amazon ElastiCache, you can add an in-memory layer to your infrastructure.
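The usual way such a layer is used is the cache-aside pattern: check the cache first, and fall back to the slower data store only on a miss. A toy sketch, with a plain dict standing in for Redis or Memcached:

```python
cache = {}

def expensive_lookup(key):
    # Stand-in for a slow database query or computation.
    return "value-for-" + key

def get(key):
    if key in cache:                  # cache hit: served from memory
        return cache[key]
    value = expensive_lookup(key)     # cache miss: fetch from the backing store
    cache[key] = value                # populate the cache for next time
    return value

print(get("user:1"))   # first call hits the backing store
print(get("user:1"))   # second call is served from the cache
```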